00:00:00.000 Started by upstream project "autotest-per-patch" build number 132701 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.027 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.028 The recommended git tool is: git 00:00:00.028 using credential 00000000-0000-0000-0000-000000000002 00:00:00.030 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.047 Fetching changes from the remote Git repository 00:00:00.051 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.078 Using shallow fetch with depth 1 00:00:00.078 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.078 > git --version # timeout=10 00:00:00.107 > git --version # 'git version 2.39.2' 00:00:00.107 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.134 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.134 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.756 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.771 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.786 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.786 > git config core.sparsecheckout # timeout=10 00:00:04.801 > git read-tree -mu HEAD # timeout=10 00:00:04.820 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.845 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.846 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.934 [Pipeline] Start of Pipeline 00:00:04.945 [Pipeline] library 00:00:04.947 Loading library shm_lib@master 00:00:04.947 Library shm_lib@master is cached. Copying from home. 00:00:04.959 [Pipeline] node 00:00:04.970 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_3 00:00:04.972 [Pipeline] { 00:00:04.981 [Pipeline] catchError 00:00:04.982 [Pipeline] { 00:00:04.994 [Pipeline] wrap 00:00:05.002 [Pipeline] { 00:00:05.011 [Pipeline] stage 00:00:05.013 [Pipeline] { (Prologue) 00:00:05.028 [Pipeline] echo 00:00:05.029 Node: VM-host-SM38 00:00:05.033 [Pipeline] cleanWs 00:00:05.043 [WS-CLEANUP] Deleting project workspace... 00:00:05.043 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.051 [WS-CLEANUP] done 00:00:05.245 [Pipeline] setCustomBuildProperty 00:00:05.332 [Pipeline] httpRequest 00:00:06.080 [Pipeline] echo 00:00:06.082 Sorcerer 10.211.164.20 is alive 00:00:06.091 [Pipeline] retry 00:00:06.093 [Pipeline] { 00:00:06.107 [Pipeline] httpRequest 00:00:06.175 HttpMethod: GET 00:00:06.176 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.176 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.177 Response Code: HTTP/1.1 200 OK 00:00:06.177 Success: Status code 200 is in the accepted range: 200,404 00:00:06.178 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.330 [Pipeline] } 00:00:14.350 [Pipeline] // retry 00:00:14.357 [Pipeline] sh 00:00:14.646 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.664 [Pipeline] httpRequest 00:00:15.217 [Pipeline] echo 00:00:15.219 Sorcerer 10.211.164.20 is alive 00:00:15.230 [Pipeline] retry 00:00:15.232 [Pipeline] { 00:00:15.247 [Pipeline] httpRequest 00:00:15.253 HttpMethod: GET 00:00:15.254 URL: http://10.211.164.20/packages/spdk_e2dfdf06ccdc94d5ea8e4f51a307cd016e6a6875.tar.gz 00:00:15.254 Sending request to url: http://10.211.164.20/packages/spdk_e2dfdf06ccdc94d5ea8e4f51a307cd016e6a6875.tar.gz 00:00:15.280 Response Code: HTTP/1.1 200 OK 00:00:15.281 Success: Status code 200 is in the accepted range: 200,404 00:00:15.282 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/spdk_e2dfdf06ccdc94d5ea8e4f51a307cd016e6a6875.tar.gz 00:02:21.505 [Pipeline] } 00:02:21.522 [Pipeline] // retry 00:02:21.532 [Pipeline] sh 00:02:21.816 + tar --no-same-owner -xf spdk_e2dfdf06ccdc94d5ea8e4f51a307cd016e6a6875.tar.gz 00:02:25.126 [Pipeline] sh 00:02:25.410 + git -C spdk log --oneline -n5 00:02:25.410 e2dfdf06c accel/mlx5: Register post_poller handler 00:02:25.410 3c8001115 accel/mlx5: More precise condition to update DB 00:02:25.410 98eca6fa0 lib/thread: Add API to register a post poller handler 00:02:25.410 2c140f58f nvme/rdma: Support accel sequence 00:02:25.410 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing 00:02:25.430 [Pipeline] writeFile 00:02:25.445 [Pipeline] sh 00:02:25.730 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:25.762 [Pipeline] sh 00:02:26.141 + cat autorun-spdk.conf 00:02:26.141 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:26.141 SPDK_TEST_NVME=1 00:02:26.141 SPDK_TEST_FTL=1 00:02:26.141 SPDK_TEST_ISAL=1 00:02:26.141 SPDK_RUN_ASAN=1 00:02:26.141 SPDK_RUN_UBSAN=1 00:02:26.141 SPDK_TEST_XNVME=1 00:02:26.141 SPDK_TEST_NVME_FDP=1 00:02:26.141 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:26.149 RUN_NIGHTLY=0 00:02:26.151 [Pipeline] } 00:02:26.164 [Pipeline] // stage 00:02:26.177 [Pipeline] stage 00:02:26.179 [Pipeline] { (Run VM) 00:02:26.192 [Pipeline] sh 00:02:26.475 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:26.475 + echo 'Start stage prepare_nvme.sh' 00:02:26.475 Start stage prepare_nvme.sh 00:02:26.475 + [[ -n 1 ]] 00:02:26.475 + disk_prefix=ex1 00:02:26.475 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_3 ]] 00:02:26.475 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf ]] 00:02:26.475 + source /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf 00:02:26.475 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:26.475 ++ SPDK_TEST_NVME=1 00:02:26.475 ++ SPDK_TEST_FTL=1 00:02:26.475 ++ SPDK_TEST_ISAL=1 00:02:26.475 ++ 
SPDK_RUN_ASAN=1 00:02:26.475 ++ SPDK_RUN_UBSAN=1 00:02:26.475 ++ SPDK_TEST_XNVME=1 00:02:26.475 ++ SPDK_TEST_NVME_FDP=1 00:02:26.475 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:26.475 ++ RUN_NIGHTLY=0 00:02:26.475 + cd /var/jenkins/workspace/nvme-vg-autotest_3 00:02:26.475 + nvme_files=() 00:02:26.475 + declare -A nvme_files 00:02:26.475 + backend_dir=/var/lib/libvirt/images/backends 00:02:26.475 + nvme_files['nvme.img']=5G 00:02:26.475 + nvme_files['nvme-cmb.img']=5G 00:02:26.475 + nvme_files['nvme-multi0.img']=4G 00:02:26.475 + nvme_files['nvme-multi1.img']=4G 00:02:26.475 + nvme_files['nvme-multi2.img']=4G 00:02:26.475 + nvme_files['nvme-openstack.img']=8G 00:02:26.475 + nvme_files['nvme-zns.img']=5G 00:02:26.475 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:26.475 + (( SPDK_TEST_FTL == 1 )) 00:02:26.475 + nvme_files["nvme-ftl.img"]=6G 00:02:26.475 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:26.475 + nvme_files["nvme-fdp.img"]=1G 00:02:26.475 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:02:26.475 + for nvme in "${!nvme_files[@]}" 00:02:26.475 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:02:26.475 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:26.475 + for nvme in "${!nvme_files[@]}" 00:02:26.475 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:02:26.475 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:02:26.475 + for nvme in "${!nvme_files[@]}" 00:02:26.475 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:02:26.736 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:26.736 + for nvme in "${!nvme_files[@]}" 00:02:26.736 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:02:26.736 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:26.998 + for nvme in "${!nvme_files[@]}" 00:02:26.998 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:02:26.998 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:26.998 + for nvme in "${!nvme_files[@]}" 00:02:26.998 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:02:26.998 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:26.998 + for nvme in "${!nvme_files[@]}" 00:02:26.998 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:02:27.260 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:27.260 + for nvme in "${!nvme_files[@]}" 00:02:27.260 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:02:27.260 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:02:27.260 + for nvme in "${!nvme_files[@]}" 00:02:27.260 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:02:28.205 Formatting 
'/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:28.205 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:02:28.205 + echo 'End stage prepare_nvme.sh' 00:02:28.205 End stage prepare_nvme.sh 00:02:28.218 [Pipeline] sh 00:02:28.502 + DISTRO=fedora39 00:02:28.502 + CPUS=10 00:02:28.502 + RAM=12288 00:02:28.502 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:28.502 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:02:28.502 00:02:28.502 DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant 00:02:28.502 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk 00:02:28.502 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_3 00:02:28.502 HELP=0 00:02:28.502 DRY_RUN=0 00:02:28.502 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:02:28.502 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:02:28.502 NVME_AUTO_CREATE=0 00:02:28.502 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:02:28.502 NVME_CMB=,,,, 00:02:28.502 NVME_PMR=,,,, 00:02:28.502 NVME_ZNS=,,,, 00:02:28.502 NVME_MS=true,,,, 00:02:28.502 NVME_FDP=,,,on, 00:02:28.502 SPDK_VAGRANT_DISTRO=fedora39 00:02:28.502 SPDK_VAGRANT_VMCPU=10 00:02:28.502 SPDK_VAGRANT_VMRAM=12288 00:02:28.502 SPDK_VAGRANT_PROVIDER=libvirt 00:02:28.502 SPDK_VAGRANT_HTTP_PROXY= 00:02:28.502 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:28.502 SPDK_OPENSTACK_NETWORK=0 00:02:28.502 VAGRANT_PACKAGE_BOX=0 00:02:28.502 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 00:02:28.502 FORCE_DISTRO=true 00:02:28.502 VAGRANT_BOX_VERSION= 00:02:28.502 EXTRA_VAGRANTFILES= 00:02:28.502 NIC_MODEL=e1000 00:02:28.502 00:02:28.502 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt' 00:02:28.502 /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_3 00:02:31.094 Bringing machine 'default' up with 'libvirt' provider... 00:02:31.668 ==> default: Creating image (snapshot of base box volume). 00:02:31.668 ==> default: Creating domain with the following settings... 
00:02:31.668 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733426458_164f859ae107275a2d00
00:02:31.668 ==> default: -- Domain type: kvm
00:02:31.668 ==> default: -- Cpus: 10
00:02:31.668 ==> default: -- Feature: acpi
00:02:31.668 ==> default: -- Feature: apic
00:02:31.668 ==> default: -- Feature: pae
00:02:31.668 ==> default: -- Memory: 12288M
00:02:31.668 ==> default: -- Memory Backing: hugepages:
00:02:31.668 ==> default: -- Management MAC:
00:02:31.668 ==> default: -- Loader:
00:02:31.668 ==> default: -- Nvram:
00:02:31.668 ==> default: -- Base box: spdk/fedora39
00:02:31.668 ==> default: -- Storage pool: default
00:02:31.668 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733426458_164f859ae107275a2d00.img (20G)
00:02:31.668 ==> default: -- Volume Cache: default
00:02:31.668 ==> default: -- Kernel:
00:02:31.668 ==> default: -- Initrd:
00:02:31.668 ==> default: -- Graphics Type: vnc
00:02:31.668 ==> default: -- Graphics Port: -1
00:02:31.668 ==> default: -- Graphics IP: 127.0.0.1
00:02:31.668 ==> default: -- Graphics Password: Not defined
00:02:31.668 ==> default: -- Video Type: cirrus
00:02:31.668 ==> default: -- Video VRAM: 9216
00:02:31.668 ==> default: -- Sound Type:
00:02:31.668 ==> default: -- Keymap: en-us
00:02:31.668 ==> default: -- TPM Path:
00:02:31.668 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:31.668 ==> default: -- Command line args:
00:02:31.668 ==> default: -> value=-device,
00:02:31.668 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:31.668 ==> default: -> value=-drive,
00:02:31.668 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:02:31.668 ==> default: -> value=-device,
00:02:31.668 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:02:31.668 ==> default: -> value=-device,
00:02:31.668 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:31.668 ==> default: -> value=-drive,
00:02:31.668 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0,
00:02:31.668 ==> default: -> value=-device,
00:02:31.668 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:31.668 ==> default: -> value=-device,
00:02:31.668 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:02:31.668 ==> default: -> value=-drive,
00:02:31.668 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:02:31.668 ==> default: -> value=-device,
00:02:31.668 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:31.668 ==> default: -> value=-drive,
00:02:31.668 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:02:31.668 ==> default: -> value=-device,
00:02:31.668 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:31.669 ==> default: -> value=-drive,
00:02:31.931 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:02:31.931 ==> default: -> value=-device,
00:02:31.931 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:31.931 ==> default: -> value=-device,
00:02:31.931 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:02:31.931 ==> default: -> value=-device,
00:02:31.931 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:02:31.931 ==> default: -> value=-drive,
00:02:31.931 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:02:31.931 ==> default: -> value=-device,
00:02:31.931 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:31.931 ==> default: Creating shared folders metadata...
00:02:31.931 ==> default: Starting domain.
00:02:33.852 ==> default: Waiting for domain to get an IP address...
00:02:51.973 ==> default: Waiting for SSH to become available...
00:02:51.973 ==> default: Configuring and enabling network interfaces...
00:02:55.280 default: SSH address: 192.168.121.193:22
00:02:55.280 default: SSH username: vagrant
00:02:55.280 default: SSH auth method: private key
00:02:57.828 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk
00:03:05.988 ==> default: Mounting SSHFS shared folder...
00:03:07.904 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:03:07.904 ==> default: Checking Mount..
00:03:08.846 ==> default: Folder Successfully Mounted!
00:03:08.846
00:03:08.846 SUCCESS!
00:03:08.846
00:03:08.846 cd to /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use.
00:03:08.846 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:03:08.846 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm.
00:03:08.846
00:03:08.857 [Pipeline] }
00:03:08.872 [Pipeline] // stage
00:03:08.881 [Pipeline] dir
00:03:08.882 Running in /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt
00:03:08.884 [Pipeline] {
00:03:08.896 [Pipeline] catchError
00:03:08.899 [Pipeline] {
00:03:08.910 [Pipeline] sh
00:03:09.195 + vagrant ssh-config --host vagrant
00:03:09.195 + sed -ne '/^Host/,$p'
00:03:09.195 + tee ssh_conf
00:03:11.743 Host vagrant
00:03:11.743 HostName 192.168.121.193
00:03:11.743 User vagrant
00:03:11.743 Port 22
00:03:11.743 UserKnownHostsFile /dev/null
00:03:11.743 StrictHostKeyChecking no
00:03:11.743 PasswordAuthentication no
00:03:11.743 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:03:11.743 IdentitiesOnly yes
00:03:11.743 LogLevel FATAL
00:03:11.743 ForwardAgent yes
00:03:11.743 ForwardX11 yes
00:03:11.743
00:03:11.761 [Pipeline] withEnv
00:03:11.764 [Pipeline] {
00:03:11.779 [Pipeline] sh
00:03:12.109 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:03:12.110 source /etc/os-release
00:03:12.110 [[ -e /image.version ]] && img=$(< /image.version)
00:03:12.110 # Minimal, systemd-like check.
00:03:12.110 if [[ -e /.dockerenv ]]; then 00:03:12.110 # Clear garbage from the node'\''s name: 00:03:12.110 # agt-er_autotest_547-896 -> autotest_547-896 00:03:12.110 # $HOSTNAME is the actual container id 00:03:12.110 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:12.110 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:12.110 # We can assume this is a mount from a host where container is running, 00:03:12.110 # so fetch its hostname to easily identify the target swarm worker. 00:03:12.110 container="$(< /etc/hostname) ($agent)" 00:03:12.110 else 00:03:12.110 # Fallback 00:03:12.110 container=$agent 00:03:12.110 fi 00:03:12.110 fi 00:03:12.110 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:12.110 ' 00:03:12.135 [Pipeline] } 00:03:12.154 [Pipeline] // withEnv 00:03:12.164 [Pipeline] setCustomBuildProperty 00:03:12.182 [Pipeline] stage 00:03:12.184 [Pipeline] { (Tests) 00:03:12.202 [Pipeline] sh 00:03:12.490 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:12.767 [Pipeline] sh 00:03:13.052 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:13.330 [Pipeline] timeout 00:03:13.330 Timeout set to expire in 50 min 00:03:13.332 [Pipeline] { 00:03:13.347 [Pipeline] sh 00:03:13.631 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:03:14.204 HEAD is now at e2dfdf06c accel/mlx5: Register post_poller handler 00:03:14.218 [Pipeline] sh 00:03:14.505 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:03:14.782 [Pipeline] sh 00:03:15.066 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:15.342 [Pipeline] sh 00:03:15.625 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:03:15.885 ++ readlink -f spdk_repo 00:03:15.885 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:15.885 + [[ -n /home/vagrant/spdk_repo ]] 00:03:15.885 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:15.885 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:15.885 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:15.885 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:15.885 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:15.885 + [[ nvme-vg-autotest == pkgdep-* ]] 00:03:15.885 + cd /home/vagrant/spdk_repo 00:03:15.885 + source /etc/os-release 00:03:15.885 ++ NAME='Fedora Linux' 00:03:15.885 ++ VERSION='39 (Cloud Edition)' 00:03:15.885 ++ ID=fedora 00:03:15.885 ++ VERSION_ID=39 00:03:15.885 ++ VERSION_CODENAME= 00:03:15.885 ++ PLATFORM_ID=platform:f39 00:03:15.885 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:15.885 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:15.885 ++ LOGO=fedora-logo-icon 00:03:15.885 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:15.885 ++ HOME_URL=https://fedoraproject.org/ 00:03:15.885 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:15.885 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:15.885 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:15.885 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:15.885 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:15.885 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:15.885 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:15.885 ++ SUPPORT_END=2024-11-12 00:03:15.885 ++ VARIANT='Cloud Edition' 00:03:15.885 ++ VARIANT_ID=cloud 00:03:15.885 + uname -a 00:03:15.885 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:15.885 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:16.146 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:16.434 Hugepages 00:03:16.434 node hugesize free / total 00:03:16.434 node0 1048576kB 0 / 0 00:03:16.434 node0 2048kB 0 / 0 00:03:16.434 00:03:16.434 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:16.434 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:16.727 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:16.727 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:16.727 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:16.727 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:03:16.727 + rm -f /tmp/spdk-ld-path 00:03:16.727 + source autorun-spdk.conf 00:03:16.727 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:16.727 ++ SPDK_TEST_NVME=1 00:03:16.727 ++ SPDK_TEST_FTL=1 00:03:16.728 ++ SPDK_TEST_ISAL=1 00:03:16.728 ++ SPDK_RUN_ASAN=1 00:03:16.728 ++ SPDK_RUN_UBSAN=1 00:03:16.728 ++ SPDK_TEST_XNVME=1 00:03:16.728 ++ SPDK_TEST_NVME_FDP=1 00:03:16.728 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:16.728 ++ RUN_NIGHTLY=0 00:03:16.728 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:16.728 + [[ -n '' ]] 00:03:16.728 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:16.728 + for M in /var/spdk/build-*-manifest.txt 00:03:16.728 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:16.728 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:16.728 + for M in /var/spdk/build-*-manifest.txt 00:03:16.728 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:16.728 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:16.728 + for M in /var/spdk/build-*-manifest.txt 00:03:16.728 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:16.728 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:16.728 ++ uname 00:03:16.728 + [[ Linux == \L\i\n\u\x ]] 00:03:16.728 + sudo dmesg -T 00:03:16.728 + sudo dmesg --clear 00:03:16.728 + dmesg_pid=5029 00:03:16.728 
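The autoruner excerpt above fingerprints the guest before any tests run: it sources /etc/os-release for the distro, prints the kernel with uname -a, and runs SPDK's scripts/setup.sh status to inventory hugepages and the NVMe/virtio devices the QEMU command line created. A minimal standalone sketch of the same check sequence, assuming an SPDK checkout at $HOME/spdk_repo/spdk as in this job:

#!/usr/bin/env bash
# Sketch only: fingerprint a test VM the way this log does.
source /etc/os-release                                # sets NAME, VERSION_ID, PRETTY_NAME, ...
echo "Distro: ${PRETTY_NAME}"
uname -a                                              # kernel build the tests will run against
sudo "$HOME/spdk_repo/spdk/scripts/setup.sh" status   # hugepage and PCI/NVMe device inventory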
+ [[ Fedora Linux == FreeBSD ]] 00:03:16.728 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:16.728 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:16.728 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:16.728 + [[ -x /usr/src/fio-static/fio ]] 00:03:16.728 + sudo dmesg -Tw 00:03:16.728 + export FIO_BIN=/usr/src/fio-static/fio 00:03:16.728 + FIO_BIN=/usr/src/fio-static/fio 00:03:16.728 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:16.728 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:16.728 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:16.728 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:16.728 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:16.728 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:16.728 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:16.728 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:16.728 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:16.991 19:21:43 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:03:16.991 19:21:43 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:16.991 19:21:43 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:16.991 19:21:43 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:03:16.991 19:21:43 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:03:16.991 19:21:43 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:03:16.991 19:21:43 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:03:16.991 19:21:43 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:03:16.991 19:21:43 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:03:16.991 19:21:43 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:03:16.991 19:21:43 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:16.991 19:21:43 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:03:16.991 19:21:43 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:16.991 19:21:43 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:16.991 19:21:44 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:03:16.991 19:21:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:16.991 19:21:44 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:16.991 19:21:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:16.991 19:21:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:16.991 19:21:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:16.991 19:21:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.991 19:21:44 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.991 19:21:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.991 19:21:44 -- paths/export.sh@5 -- $ export PATH 00:03:16.991 19:21:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.991 19:21:44 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:16.991 19:21:44 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:16.991 19:21:44 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733426504.XXXXXX 00:03:16.991 19:21:44 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733426504.oGcepk 00:03:16.991 19:21:44 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:16.991 19:21:44 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:03:16.991 19:21:44 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:16.991 19:21:44 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:16.991 19:21:44 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:16.991 19:21:44 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:16.991 19:21:44 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:16.991 19:21:44 -- common/autotest_common.sh@10 -- $ set +x 00:03:16.991 19:21:44 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:03:16.991 19:21:44 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:16.991 19:21:44 -- pm/common@17 -- $ local monitor 00:03:16.991 19:21:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.991 19:21:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.991 19:21:44 -- pm/common@25 -- $ sleep 1 00:03:16.991 19:21:44 -- pm/common@21 -- $ date +%s 00:03:16.991 19:21:44 -- pm/common@21 -- $ date +%s 00:03:16.991 19:21:44 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733426504 00:03:16.991 19:21:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733426504 00:03:16.991 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733426504_collect-cpu-load.pm.log 00:03:16.991 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733426504_collect-vmstat.pm.log 00:03:17.934 19:21:45 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:17.934 19:21:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:17.934 19:21:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:17.934 19:21:45 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:17.934 19:21:45 -- spdk/autobuild.sh@16 -- $ date -u 00:03:17.934 Thu Dec 5 07:21:45 PM UTC 2024 00:03:17.934 19:21:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:17.934 v25.01-pre-300-ge2dfdf06c 00:03:17.934 19:21:45 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:17.934 19:21:45 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:17.934 19:21:45 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:17.934 19:21:45 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:17.934 19:21:45 -- common/autotest_common.sh@10 -- $ set +x 00:03:17.934 ************************************ 00:03:17.934 START TEST asan 00:03:17.934 ************************************ 00:03:17.934 using asan 00:03:17.934 19:21:45 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:03:17.934 00:03:17.934 real 0m0.000s 00:03:17.934 user 0m0.000s 00:03:17.934 sys 0m0.000s 00:03:17.934 19:21:45 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:17.934 19:21:45 asan -- common/autotest_common.sh@10 -- $ set +x 00:03:17.934 ************************************ 00:03:17.934 END TEST asan 00:03:17.934 ************************************ 00:03:17.934 19:21:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:17.934 19:21:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:17.934 19:21:45 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:17.934 19:21:45 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:18.195 19:21:45 -- common/autotest_common.sh@10 -- $ set +x 00:03:18.195 ************************************ 00:03:18.195 START TEST ubsan 00:03:18.195 ************************************ 00:03:18.195 using ubsan 00:03:18.195 19:21:45 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:18.195 00:03:18.195 real 0m0.000s 00:03:18.195 user 0m0.000s 00:03:18.195 sys 0m0.000s 00:03:18.195 19:21:45 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:18.195 ************************************ 00:03:18.195 END TEST ubsan 00:03:18.195 ************************************ 00:03:18.195 19:21:45 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:18.195 19:21:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:18.195 19:21:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:18.195 19:21:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:18.195 19:21:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:18.195 19:21:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:18.195 19:21:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:18.195 19:21:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
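The asan and ubsan blocks above are produced by SPDK's run_test helper, which brackets a command with START TEST/END TEST banners and a time summary so each test can be grepped out of the console log. A rough sketch of that banner-and-timing pattern, illustrative only (the real helper lives in SPDK's test/common/autotest_common.sh and additionally manages xtrace state):

run_test() {                      # sketch of the pattern, not SPDK's actual implementation
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                     # run the wrapped command; prints real/user/sys as in the log
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
run_test asan echo 'using asan'   # invocation shape matching the log above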
00:03:18.195 19:21:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:18.195 19:21:45 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:03:18.196 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:18.196 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:18.765 Using 'verbs' RDMA provider
00:03:31.958 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:44.187 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:44.187 Creating mk/config.mk...done.
00:03:44.187 Creating mk/cc.flags.mk...done.
00:03:44.187 Type 'make' to build.
00:03:44.187 19:22:10 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:44.187 19:22:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:44.187 19:22:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:44.187 19:22:10 -- common/autotest_common.sh@10 -- $ set +x
00:03:44.187 ************************************
00:03:44.187 START TEST make
00:03:44.188 ************************************
00:03:44.188 19:22:10 make -- common/autotest_common.sh@1129 -- $ make -j10
00:03:44.188 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:03:44.188 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:03:44.188 meson setup builddir \
00:03:44.188 -Dwith-libaio=enabled \
00:03:44.188 -Dwith-liburing=enabled \
00:03:44.188 -Dwith-libvfn=disabled \
00:03:44.188 -Dwith-spdk=disabled \
00:03:44.188 -Dexamples=false \
00:03:44.188 -Dtests=false \
00:03:44.188 -Dtools=false && \
00:03:44.188 meson compile -C builddir && \
00:03:44.188 cd -)
00:03:44.188 make[1]: Nothing to be done for 'all'.
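The first action of the make stage, shown above, is building the bundled xnvme: meson is configured with the libaio and liburing backends enabled, and libvfn, the SPDK backend, examples, tests, and tools disabled. A sketch of reproducing that step by hand, assuming meson and ninja are installed and $SPDK_DIR stands in for /home/vagrant/spdk_repo/spdk:

# Sketch only: rerun the xnvme configure/build from this log outside the CI.
cd "$SPDK_DIR/xnvme"
export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
meson setup builddir \
    -Dwith-libaio=enabled -Dwith-liburing=enabled \
    -Dwith-libvfn=disabled -Dwith-spdk=disabled \
    -Dexamples=false -Dtests=false -Dtools=false
meson compile -C builddir          # drives ninja, producing the [n/76] lines below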
00:03:45.585 The Meson build system
00:03:45.585 Version: 1.5.0
00:03:45.585 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:03:45.585 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:45.585 Build type: native build
00:03:45.585 Project name: xnvme
00:03:45.585 Project version: 0.7.5
00:03:45.585 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:45.585 C linker for the host machine: cc ld.bfd 2.40-14
00:03:45.585 Host machine cpu family: x86_64
00:03:45.585 Host machine cpu: x86_64
00:03:45.585 Message: host_machine.system: linux
00:03:45.585 Compiler for C supports arguments -Wno-missing-braces: YES
00:03:45.585 Compiler for C supports arguments -Wno-cast-function-type: YES
00:03:45.585 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:45.585 Run-time dependency threads found: YES
00:03:45.585 Has header "setupapi.h" : NO
00:03:45.585 Has header "linux/blkzoned.h" : YES
00:03:45.585 Has header "linux/blkzoned.h" : YES (cached)
00:03:45.585 Has header "libaio.h" : YES
00:03:45.585 Library aio found: YES
00:03:45.585 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:45.585 Run-time dependency liburing found: YES 2.2
00:03:45.585 Dependency libvfn skipped: feature with-libvfn disabled
00:03:45.585 Found CMake: /usr/bin/cmake (3.27.7)
00:03:45.585 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:03:45.585 Subproject spdk : skipped: feature with-spdk disabled
00:03:45.585 Run-time dependency appleframeworks found: NO (tried framework)
00:03:45.585 Run-time dependency appleframeworks found: NO (tried framework)
00:03:45.585 Library rt found: YES
00:03:45.585 Checking for function "clock_gettime" with dependency -lrt: YES
00:03:45.585 Configuring xnvme_config.h using configuration
00:03:45.585 Configuring xnvme.spec using configuration
00:03:45.585 Run-time dependency bash-completion found: YES 2.11
00:03:45.585 Message: Bash-completions: /usr/share/bash-completion/completions
00:03:45.585 Program cp found: YES (/usr/bin/cp)
00:03:45.585 Build targets in project: 3
00:03:45.585
00:03:45.585 xnvme 0.7.5
00:03:45.585
00:03:45.585 Subprojects
00:03:45.585 spdk : NO Feature 'with-spdk' disabled
00:03:45.585
00:03:45.585 User defined options
00:03:45.585 examples : false
00:03:45.585 tests : false
00:03:45.585 tools : false
00:03:45.585 with-libaio : enabled
00:03:45.585 with-liburing: enabled
00:03:45.585 with-libvfn : disabled
00:03:45.585 with-spdk : disabled
00:03:45.585
00:03:45.585 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:45.846 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:03:45.846 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:03:45.846 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:03:45.846 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:03:45.846 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:03:45.846 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:03:45.846 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:03:45.846 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:03:45.846 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:03:45.846 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:03:45.846 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:03:45.846
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:45.846 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:46.105 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:46.105 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:46.105 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:46.105 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:46.105 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:46.105 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:46.105 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:46.105 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:46.105 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:46.105 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:46.105 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:46.105 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:46.105 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:46.105 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:46.105 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:46.105 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:46.105 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:46.105 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:46.105 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:46.105 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:46.105 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:46.105 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:46.105 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:46.105 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:46.105 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:46.105 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:46.105 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:46.105 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:46.105 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:46.105 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:46.105 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:46.105 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:46.105 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:46.365 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:46.365 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:46.365 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:46.365 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:46.365 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:46.365 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:46.365 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:46.365 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:46.365 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:46.365 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:46.365 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:46.365 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:46.365 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:46.365 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:46.365 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:46.365 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:46.365 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:46.365 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:46.365 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:46.365 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:46.365 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:46.624 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:46.624 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:46.624 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:46.624 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:46.624 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:46.624 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:46.624 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:46.883 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:46.883 [75/76] Linking static target lib/libxnvme.a 00:03:47.142 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:47.142 INFO: autodetecting backend as ninja 00:03:47.142 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:47.142 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:55.275 The Meson build system 00:03:55.275 Version: 1.5.0 00:03:55.275 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:55.275 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:55.275 Build type: native build 00:03:55.275 Program cat found: YES (/usr/bin/cat) 00:03:55.275 Project name: DPDK 00:03:55.275 Project version: 24.03.0 00:03:55.275 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:55.275 C linker for the host machine: cc ld.bfd 2.40-14 00:03:55.275 Host machine cpu family: x86_64 00:03:55.275 Host machine cpu: x86_64 00:03:55.275 Message: ## Building in Developer Mode ## 00:03:55.275 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:55.275 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:55.275 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:55.275 Program python3 found: YES (/usr/bin/python3) 00:03:55.275 Program cat found: YES (/usr/bin/cat) 00:03:55.275 Compiler for C supports arguments -march=native: YES 00:03:55.275 Checking for size of "void *" : 8 00:03:55.275 Checking for size of "void *" : 8 (cached) 00:03:55.275 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:03:55.275 Library m found: YES 00:03:55.275 Library numa found: YES 00:03:55.275 Has header "numaif.h" : YES 00:03:55.275 Library fdt found: NO 00:03:55.275 Library execinfo found: NO 00:03:55.275 Has header "execinfo.h" : YES 00:03:55.275 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:55.275 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:55.275 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:55.275 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:55.275 Run-time dependency openssl found: YES 3.1.1 00:03:55.275 Run-time dependency libpcap found: YES 1.10.4 00:03:55.275 Has header "pcap.h" with dependency libpcap: YES 00:03:55.275 Compiler for C supports arguments -Wcast-qual: YES 00:03:55.275 Compiler for C supports arguments -Wdeprecated: YES 00:03:55.275 Compiler for C supports arguments -Wformat: YES 00:03:55.275 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:55.275 Compiler for C supports arguments -Wformat-security: NO 00:03:55.275 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:55.275 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:55.275 Compiler for C supports arguments -Wnested-externs: YES 00:03:55.275 Compiler for C supports arguments -Wold-style-definition: YES 00:03:55.275 Compiler for C supports arguments -Wpointer-arith: YES 00:03:55.275 Compiler for C supports arguments -Wsign-compare: YES 00:03:55.275 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:55.275 Compiler for C supports arguments -Wundef: YES 00:03:55.275 Compiler for C supports arguments -Wwrite-strings: YES 00:03:55.275 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:55.275 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:55.275 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:55.275 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:55.275 Program objdump found: YES (/usr/bin/objdump) 00:03:55.275 Compiler for C supports arguments -mavx512f: YES 00:03:55.275 Checking if "AVX512 checking" compiles: YES 00:03:55.275 Fetching value of define "__SSE4_2__" : 1 00:03:55.275 Fetching value of define "__AES__" : 1 00:03:55.275 Fetching value of define "__AVX__" : 1 00:03:55.275 Fetching value of define "__AVX2__" : 1 00:03:55.275 Fetching value of define "__AVX512BW__" : 1 00:03:55.275 Fetching value of define "__AVX512CD__" : 1 00:03:55.275 Fetching value of define "__AVX512DQ__" : 1 00:03:55.275 Fetching value of define "__AVX512F__" : 1 00:03:55.275 Fetching value of define "__AVX512VL__" : 1 00:03:55.275 Fetching value of define "__PCLMUL__" : 1 00:03:55.275 Fetching value of define "__RDRND__" : 1 00:03:55.275 Fetching value of define "__RDSEED__" : 1 00:03:55.275 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:55.275 Fetching value of define "__znver1__" : (undefined) 00:03:55.275 Fetching value of define "__znver2__" : (undefined) 00:03:55.275 Fetching value of define "__znver3__" : (undefined) 00:03:55.275 Fetching value of define "__znver4__" : (undefined) 00:03:55.275 Library asan found: YES 00:03:55.275 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:55.275 Message: lib/log: Defining dependency "log" 00:03:55.275 Message: lib/kvargs: Defining dependency "kvargs" 00:03:55.275 Message: lib/telemetry: Defining dependency "telemetry" 00:03:55.275 Library rt found: YES 00:03:55.275 Checking for function "getentropy" : NO 00:03:55.275 Message: 
lib/eal: Defining dependency "eal" 00:03:55.275 Message: lib/ring: Defining dependency "ring" 00:03:55.275 Message: lib/rcu: Defining dependency "rcu" 00:03:55.275 Message: lib/mempool: Defining dependency "mempool" 00:03:55.275 Message: lib/mbuf: Defining dependency "mbuf" 00:03:55.275 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:55.275 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:55.275 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:55.275 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:55.275 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:55.275 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:55.275 Compiler for C supports arguments -mpclmul: YES 00:03:55.275 Compiler for C supports arguments -maes: YES 00:03:55.275 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:55.275 Compiler for C supports arguments -mavx512bw: YES 00:03:55.275 Compiler for C supports arguments -mavx512dq: YES 00:03:55.275 Compiler for C supports arguments -mavx512vl: YES 00:03:55.275 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:55.276 Compiler for C supports arguments -mavx2: YES 00:03:55.276 Compiler for C supports arguments -mavx: YES 00:03:55.276 Message: lib/net: Defining dependency "net" 00:03:55.276 Message: lib/meter: Defining dependency "meter" 00:03:55.276 Message: lib/ethdev: Defining dependency "ethdev" 00:03:55.276 Message: lib/pci: Defining dependency "pci" 00:03:55.276 Message: lib/cmdline: Defining dependency "cmdline" 00:03:55.276 Message: lib/hash: Defining dependency "hash" 00:03:55.276 Message: lib/timer: Defining dependency "timer" 00:03:55.276 Message: lib/compressdev: Defining dependency "compressdev" 00:03:55.276 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:55.276 Message: lib/dmadev: Defining dependency "dmadev" 00:03:55.276 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:55.276 Message: lib/power: Defining dependency "power" 00:03:55.276 Message: lib/reorder: Defining dependency "reorder" 00:03:55.276 Message: lib/security: Defining dependency "security" 00:03:55.276 Has header "linux/userfaultfd.h" : YES 00:03:55.276 Has header "linux/vduse.h" : YES 00:03:55.276 Message: lib/vhost: Defining dependency "vhost" 00:03:55.276 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:55.276 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:55.276 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:55.276 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:55.276 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:55.276 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:55.276 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:55.276 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:55.276 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:55.276 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:55.276 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:55.276 Configuring doxy-api-html.conf using configuration 00:03:55.276 Configuring doxy-api-man.conf using configuration 00:03:55.276 Program mandb found: YES (/usr/bin/mandb) 00:03:55.276 Program sphinx-build found: NO 00:03:55.276 Configuring rte_build_config.h using configuration 00:03:55.276 Message: 00:03:55.276 ================= 00:03:55.276 Applications Enabled 00:03:55.276 
================= 00:03:55.276 00:03:55.276 apps: 00:03:55.276 00:03:55.276 00:03:55.276 Message: 00:03:55.276 ================= 00:03:55.276 Libraries Enabled 00:03:55.276 ================= 00:03:55.276 00:03:55.276 libs: 00:03:55.276 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:55.276 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:55.276 cryptodev, dmadev, power, reorder, security, vhost, 00:03:55.276 00:03:55.276 Message: 00:03:55.276 =============== 00:03:55.276 Drivers Enabled 00:03:55.276 =============== 00:03:55.276 00:03:55.276 common: 00:03:55.276 00:03:55.276 bus: 00:03:55.276 pci, vdev, 00:03:55.276 mempool: 00:03:55.276 ring, 00:03:55.276 dma: 00:03:55.276 00:03:55.276 net: 00:03:55.276 00:03:55.276 crypto: 00:03:55.276 00:03:55.276 compress: 00:03:55.276 00:03:55.276 vdpa: 00:03:55.276 00:03:55.276 00:03:55.276 Message: 00:03:55.276 ================= 00:03:55.276 Content Skipped 00:03:55.276 ================= 00:03:55.276 00:03:55.276 apps: 00:03:55.276 dumpcap: explicitly disabled via build config 00:03:55.276 graph: explicitly disabled via build config 00:03:55.276 pdump: explicitly disabled via build config 00:03:55.276 proc-info: explicitly disabled via build config 00:03:55.276 test-acl: explicitly disabled via build config 00:03:55.276 test-bbdev: explicitly disabled via build config 00:03:55.276 test-cmdline: explicitly disabled via build config 00:03:55.276 test-compress-perf: explicitly disabled via build config 00:03:55.276 test-crypto-perf: explicitly disabled via build config 00:03:55.276 test-dma-perf: explicitly disabled via build config 00:03:55.276 test-eventdev: explicitly disabled via build config 00:03:55.276 test-fib: explicitly disabled via build config 00:03:55.276 test-flow-perf: explicitly disabled via build config 00:03:55.276 test-gpudev: explicitly disabled via build config 00:03:55.276 test-mldev: explicitly disabled via build config 00:03:55.276 test-pipeline: explicitly disabled via build config 00:03:55.276 test-pmd: explicitly disabled via build config 00:03:55.276 test-regex: explicitly disabled via build config 00:03:55.276 test-sad: explicitly disabled via build config 00:03:55.276 test-security-perf: explicitly disabled via build config 00:03:55.276 00:03:55.276 libs: 00:03:55.276 argparse: explicitly disabled via build config 00:03:55.276 metrics: explicitly disabled via build config 00:03:55.276 acl: explicitly disabled via build config 00:03:55.276 bbdev: explicitly disabled via build config 00:03:55.276 bitratestats: explicitly disabled via build config 00:03:55.276 bpf: explicitly disabled via build config 00:03:55.276 cfgfile: explicitly disabled via build config 00:03:55.276 distributor: explicitly disabled via build config 00:03:55.276 efd: explicitly disabled via build config 00:03:55.276 eventdev: explicitly disabled via build config 00:03:55.276 dispatcher: explicitly disabled via build config 00:03:55.276 gpudev: explicitly disabled via build config 00:03:55.276 gro: explicitly disabled via build config 00:03:55.276 gso: explicitly disabled via build config 00:03:55.276 ip_frag: explicitly disabled via build config 00:03:55.276 jobstats: explicitly disabled via build config 00:03:55.276 latencystats: explicitly disabled via build config 00:03:55.276 lpm: explicitly disabled via build config 00:03:55.276 member: explicitly disabled via build config 00:03:55.276 pcapng: explicitly disabled via build config 00:03:55.276 rawdev: explicitly disabled via build config 00:03:55.276 regexdev: explicitly 
00:03:55.276 mldev: explicitly disabled via build config
00:03:55.276 rib: explicitly disabled via build config
00:03:55.276 sched: explicitly disabled via build config
00:03:55.276 stack: explicitly disabled via build config
00:03:55.276 ipsec: explicitly disabled via build config
00:03:55.276 pdcp: explicitly disabled via build config
00:03:55.276 fib: explicitly disabled via build config
00:03:55.276 port: explicitly disabled via build config
00:03:55.276 pdump: explicitly disabled via build config
00:03:55.276 table: explicitly disabled via build config
00:03:55.276 pipeline: explicitly disabled via build config
00:03:55.276 graph: explicitly disabled via build config
00:03:55.276 node: explicitly disabled via build config
00:03:55.276 
00:03:55.276 drivers:
00:03:55.276 common/cpt: not in enabled drivers build config
00:03:55.276 common/dpaax: not in enabled drivers build config
00:03:55.276 common/iavf: not in enabled drivers build config
00:03:55.276 common/idpf: not in enabled drivers build config
00:03:55.276 common/ionic: not in enabled drivers build config
00:03:55.276 common/mvep: not in enabled drivers build config
00:03:55.276 common/octeontx: not in enabled drivers build config
00:03:55.276 bus/auxiliary: not in enabled drivers build config
00:03:55.276 bus/cdx: not in enabled drivers build config
00:03:55.276 bus/dpaa: not in enabled drivers build config
00:03:55.276 bus/fslmc: not in enabled drivers build config
00:03:55.276 bus/ifpga: not in enabled drivers build config
00:03:55.276 bus/platform: not in enabled drivers build config
00:03:55.276 bus/uacce: not in enabled drivers build config
00:03:55.276 bus/vmbus: not in enabled drivers build config
00:03:55.276 common/cnxk: not in enabled drivers build config
00:03:55.276 common/mlx5: not in enabled drivers build config
00:03:55.276 common/nfp: not in enabled drivers build config
00:03:55.276 common/nitrox: not in enabled drivers build config
00:03:55.276 common/qat: not in enabled drivers build config
00:03:55.276 common/sfc_efx: not in enabled drivers build config
00:03:55.276 mempool/bucket: not in enabled drivers build config
00:03:55.276 mempool/cnxk: not in enabled drivers build config
00:03:55.276 mempool/dpaa: not in enabled drivers build config
00:03:55.276 mempool/dpaa2: not in enabled drivers build config
00:03:55.276 mempool/octeontx: not in enabled drivers build config
00:03:55.276 mempool/stack: not in enabled drivers build config
00:03:55.276 dma/cnxk: not in enabled drivers build config
00:03:55.276 dma/dpaa: not in enabled drivers build config
00:03:55.276 dma/dpaa2: not in enabled drivers build config
00:03:55.276 dma/hisilicon: not in enabled drivers build config
00:03:55.276 dma/idxd: not in enabled drivers build config
00:03:55.276 dma/ioat: not in enabled drivers build config
00:03:55.276 dma/skeleton: not in enabled drivers build config
00:03:55.276 net/af_packet: not in enabled drivers build config
00:03:55.276 net/af_xdp: not in enabled drivers build config
00:03:55.276 net/ark: not in enabled drivers build config
00:03:55.276 net/atlantic: not in enabled drivers build config
00:03:55.276 net/avp: not in enabled drivers build config
00:03:55.276 net/axgbe: not in enabled drivers build config
00:03:55.276 net/bnx2x: not in enabled drivers build config
00:03:55.276 net/bnxt: not in enabled drivers build config
00:03:55.276 net/bonding: not in enabled drivers build config
00:03:55.276 net/cnxk: not in enabled drivers build config
00:03:55.276 net/cpfl: not in enabled drivers build config
00:03:55.276 net/cxgbe: not in enabled drivers build config
00:03:55.276 net/dpaa: not in enabled drivers build config
00:03:55.276 net/dpaa2: not in enabled drivers build config
00:03:55.276 net/e1000: not in enabled drivers build config
00:03:55.276 net/ena: not in enabled drivers build config
00:03:55.276 net/enetc: not in enabled drivers build config
00:03:55.276 net/enetfec: not in enabled drivers build config
00:03:55.276 net/enic: not in enabled drivers build config
00:03:55.276 net/failsafe: not in enabled drivers build config
00:03:55.276 net/fm10k: not in enabled drivers build config
00:03:55.276 net/gve: not in enabled drivers build config
00:03:55.276 net/hinic: not in enabled drivers build config
00:03:55.276 net/hns3: not in enabled drivers build config
00:03:55.276 net/i40e: not in enabled drivers build config
00:03:55.276 net/iavf: not in enabled drivers build config
00:03:55.276 net/ice: not in enabled drivers build config
00:03:55.276 net/idpf: not in enabled drivers build config
00:03:55.276 net/igc: not in enabled drivers build config
00:03:55.277 net/ionic: not in enabled drivers build config
00:03:55.277 net/ipn3ke: not in enabled drivers build config
00:03:55.277 net/ixgbe: not in enabled drivers build config
00:03:55.277 net/mana: not in enabled drivers build config
00:03:55.277 net/memif: not in enabled drivers build config
00:03:55.277 net/mlx4: not in enabled drivers build config
00:03:55.277 net/mlx5: not in enabled drivers build config
00:03:55.277 net/mvneta: not in enabled drivers build config
00:03:55.277 net/mvpp2: not in enabled drivers build config
00:03:55.277 net/netvsc: not in enabled drivers build config
00:03:55.277 net/nfb: not in enabled drivers build config
00:03:55.277 net/nfp: not in enabled drivers build config
00:03:55.277 net/ngbe: not in enabled drivers build config
00:03:55.277 net/null: not in enabled drivers build config
00:03:55.277 net/octeontx: not in enabled drivers build config
00:03:55.277 net/octeon_ep: not in enabled drivers build config
00:03:55.277 net/pcap: not in enabled drivers build config
00:03:55.277 net/pfe: not in enabled drivers build config
00:03:55.277 net/qede: not in enabled drivers build config
00:03:55.277 net/ring: not in enabled drivers build config
00:03:55.277 net/sfc: not in enabled drivers build config
00:03:55.277 net/softnic: not in enabled drivers build config
00:03:55.277 net/tap: not in enabled drivers build config
00:03:55.277 net/thunderx: not in enabled drivers build config
00:03:55.277 net/txgbe: not in enabled drivers build config
00:03:55.277 net/vdev_netvsc: not in enabled drivers build config
00:03:55.277 net/vhost: not in enabled drivers build config
00:03:55.277 net/virtio: not in enabled drivers build config
00:03:55.277 net/vmxnet3: not in enabled drivers build config
00:03:55.277 raw/*: missing internal dependency, "rawdev"
00:03:55.277 crypto/armv8: not in enabled drivers build config
00:03:55.277 crypto/bcmfs: not in enabled drivers build config
00:03:55.277 crypto/caam_jr: not in enabled drivers build config
00:03:55.277 crypto/ccp: not in enabled drivers build config
00:03:55.277 crypto/cnxk: not in enabled drivers build config
00:03:55.277 crypto/dpaa_sec: not in enabled drivers build config
00:03:55.277 crypto/dpaa2_sec: not in enabled drivers build config
00:03:55.277 crypto/ipsec_mb: not in enabled drivers build config
00:03:55.277 crypto/mlx5: not in enabled drivers build config
00:03:55.277 crypto/mvsam: not in enabled drivers build config
00:03:55.277 crypto/nitrox: not in enabled drivers build config
00:03:55.277 crypto/null: not in enabled drivers build config
00:03:55.277 crypto/octeontx: not in enabled drivers build config
00:03:55.277 crypto/openssl: not in enabled drivers build config
00:03:55.277 crypto/scheduler: not in enabled drivers build config
00:03:55.277 crypto/uadk: not in enabled drivers build config
00:03:55.277 crypto/virtio: not in enabled drivers build config
00:03:55.277 compress/isal: not in enabled drivers build config
00:03:55.277 compress/mlx5: not in enabled drivers build config
00:03:55.277 compress/nitrox: not in enabled drivers build config
00:03:55.277 compress/octeontx: not in enabled drivers build config
00:03:55.277 compress/zlib: not in enabled drivers build config
00:03:55.277 regex/*: missing internal dependency, "regexdev"
00:03:55.277 ml/*: missing internal dependency, "mldev"
00:03:55.277 vdpa/ifc: not in enabled drivers build config
00:03:55.277 vdpa/mlx5: not in enabled drivers build config
00:03:55.277 vdpa/nfp: not in enabled drivers build config
00:03:55.277 vdpa/sfc: not in enabled drivers build config
00:03:55.277 event/*: missing internal dependency, "eventdev"
00:03:55.277 baseband/*: missing internal dependency, "bbdev"
00:03:55.277 gpu/*: missing internal dependency, "gpudev"
00:03:55.277 
00:03:55.277 
00:03:55.277 Build targets in project: 84
00:03:55.277 
00:03:55.277 
00:03:55.277 DPDK 24.03.0
00:03:55.277 
00:03:55.277 
00:03:55.277 User defined options
00:03:55.277 buildtype : debug
00:03:55.277 default_library : shared
00:03:55.277 libdir : lib
00:03:55.277 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:55.277 b_sanitize : address
00:03:55.277 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:55.277 c_link_args : 
00:03:55.277 cpu_instruction_set: native
00:03:55.277 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:03:55.277 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:03:55.277 enable_docs : false
00:03:55.277 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:03:55.277 enable_kmods : false
00:03:55.277 max_lcores : 128
00:03:55.277 tests : false
00:03:55.277 
00:03:55.277 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:55.277 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:03:55.277 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:55.277 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:55.277 [3/267] Linking static target lib/librte_kvargs.a
00:03:55.277 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:55.277 [5/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:55.277 [6/267] Linking static target lib/librte_log.a
00:03:55.277 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:55.277 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:55.277 [9/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:55.277 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:55.277 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:55.277 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:55.539 [13/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:55.539 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:55.539 [15/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:55.539 [16/267] Linking static target lib/librte_telemetry.a
00:03:55.539 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:55.539 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:55.801 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:56.063 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:56.063 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:56.063 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:56.063 [23/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:56.063 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:56.063 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:56.063 [26/267] Linking target lib/librte_log.so.24.1
00:03:56.063 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:56.063 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:56.325 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:56.325 [30/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:03:56.325 [31/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:56.616 [32/267] Linking target lib/librte_kvargs.so.24.1
00:03:56.616 [33/267] Linking target lib/librte_telemetry.so.24.1
00:03:56.616 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:56.616 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:56.616 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:56.616 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:56.616 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:56.616 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:56.878 [40/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:03:56.878 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:56.878 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:56.878 [43/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:03:56.878 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:56.878 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:57.138 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:57.138 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:57.138 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:57.397 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:57.398 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:57.398 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:57.398 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:57.398 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:57.398 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:57.398 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:57.657 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:57.657 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:57.657 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:57.917 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:57.917 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:57.917 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:57.917 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:57.917 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:58.177 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:58.177 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:58.177 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:58.177 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:58.437 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:58.437 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:58.437 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:58.437 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:58.437 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:58.437 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:58.437 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:58.437 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:58.437 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:58.697 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:58.697 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:58.697 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:58.697 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:58.697 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:58.959 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:58.959 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:59.219 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:59.219 [85/267] Linking static target lib/librte_ring.a 00:03:59.219 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:59.219 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:59.219 [88/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:59.219 [89/267] Linking static target lib/librte_rcu.a 00:03:59.219 [90/267] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:59.219 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:59.219 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:59.219 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:59.219 [94/267] Linking static target lib/librte_eal.a 00:03:59.219 [95/267] Linking static target lib/librte_mempool.a 00:03:59.478 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:59.737 [97/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:59.737 [98/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.737 [99/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.737 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:59.737 [101/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:59.737 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:59.737 [103/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:59.997 [104/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:59.997 [105/267] Linking static target lib/librte_mbuf.a 00:03:59.997 [106/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:59.997 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:59.997 [108/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:59.997 [109/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:59.997 [110/267] Linking static target lib/librte_net.a 00:04:00.255 [111/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:00.255 [112/267] Linking static target lib/librte_meter.a 00:04:00.515 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:00.515 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:00.515 [115/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:00.515 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:00.515 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:00.515 [118/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:00.809 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:00.809 [120/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.096 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:01.096 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:01.096 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:01.096 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:01.096 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:01.096 [126/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:01.355 [127/267] Linking static target lib/librte_pci.a 00:04:01.355 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:01.355 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:01.355 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:01.355 [131/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:01.355 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:01.355 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:01.355 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:01.615 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:01.615 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:01.615 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:01.615 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:01.615 [139/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.615 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:01.615 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:01.615 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:01.615 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:01.615 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:01.615 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:01.875 [146/267] Linking static target lib/librte_cmdline.a 00:04:01.876 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:01.876 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:02.136 [149/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:02.136 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:02.136 [151/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:02.136 [152/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:02.136 [153/267] Linking static target lib/librte_timer.a 00:04:02.136 [154/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:02.396 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:02.396 [156/267] Linking static target lib/librte_ethdev.a 00:04:02.396 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:02.396 [158/267] Linking static target lib/librte_compressdev.a 00:04:02.396 [159/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:02.655 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:02.655 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:02.655 [162/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:02.655 [163/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:02.916 [164/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:02.916 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:02.916 [166/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:02.916 [167/267] Linking static target lib/librte_dmadev.a 00:04:02.916 [168/267] Linking static target lib/librte_hash.a 00:04:03.177 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:03.177 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:03.177 [171/267] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:03.177 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.177 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:03.177 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.437 [175/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:03.437 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:03.438 [177/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:03.697 [178/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:03.697 [179/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.697 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:03.697 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:03.697 [182/267] Linking static target lib/librte_power.a 00:04:03.697 [183/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:03.956 [184/267] Linking static target lib/librte_cryptodev.a 00:04:03.956 [185/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.956 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:04.217 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:04.217 [188/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:04.217 [189/267] Linking static target lib/librte_reorder.a 00:04:04.217 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:04.217 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:04.217 [192/267] Linking static target lib/librte_security.a 00:04:04.788 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:04.788 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:04.788 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:04.788 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.048 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:05.048 [198/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.048 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:05.310 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:05.310 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:05.310 [202/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:05.310 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:05.310 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:05.571 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:05.571 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:05.571 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:05.571 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:05.571 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:05.834 [210/267] Generating lib/cryptodev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:04:05.834 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:05.834 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:05.834 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:05.834 [214/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:05.834 [215/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:05.834 [216/267] Linking static target drivers/librte_bus_vdev.a 00:04:06.096 [217/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:06.096 [218/267] Linking static target drivers/librte_bus_pci.a 00:04:06.096 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:06.096 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:06.096 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:06.096 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:06.096 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:06.096 [224/267] Linking static target drivers/librte_mempool_ring.a 00:04:06.356 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.356 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.927 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:07.871 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.871 [229/267] Linking target lib/librte_eal.so.24.1 00:04:08.132 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:08.132 [231/267] Linking target lib/librte_meter.so.24.1 00:04:08.132 [232/267] Linking target lib/librte_ring.so.24.1 00:04:08.132 [233/267] Linking target lib/librte_dmadev.so.24.1 00:04:08.132 [234/267] Linking target lib/librte_timer.so.24.1 00:04:08.132 [235/267] Linking target lib/librte_pci.so.24.1 00:04:08.132 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:04:08.132 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:08.132 [238/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:08.132 [239/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:08.132 [240/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:08.392 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:08.392 [242/267] Linking target lib/librte_rcu.so.24.1 00:04:08.392 [243/267] Linking target lib/librte_mempool.so.24.1 00:04:08.392 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:04:08.392 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:08.392 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:08.392 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:04:08.392 [248/267] Linking target lib/librte_mbuf.so.24.1 00:04:08.651 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:08.651 [250/267] Linking 
target lib/librte_net.so.24.1 00:04:08.651 [251/267] Linking target lib/librte_compressdev.so.24.1 00:04:08.651 [252/267] Linking target lib/librte_reorder.so.24.1 00:04:08.651 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:04:08.651 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:08.651 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:08.651 [256/267] Linking target lib/librte_cmdline.so.24.1 00:04:08.651 [257/267] Linking target lib/librte_hash.so.24.1 00:04:08.651 [258/267] Linking target lib/librte_security.so.24.1 00:04:08.912 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:08.912 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.912 [261/267] Linking target lib/librte_ethdev.so.24.1 00:04:09.173 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:09.173 [263/267] Linking target lib/librte_power.so.24.1 00:04:10.183 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:10.183 [265/267] Linking static target lib/librte_vhost.a 00:04:11.590 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.590 [267/267] Linking target lib/librte_vhost.so.24.1 00:04:11.590 INFO: autodetecting backend as ninja 00:04:11.590 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:29.713 CC lib/ut_mock/mock.o 00:04:29.713 CC lib/ut/ut.o 00:04:29.713 CC lib/log/log_flags.o 00:04:29.713 CC lib/log/log.o 00:04:29.713 CC lib/log/log_deprecated.o 00:04:29.713 LIB libspdk_ut.a 00:04:29.713 LIB libspdk_ut_mock.a 00:04:29.713 LIB libspdk_log.a 00:04:29.713 SO libspdk_ut.so.2.0 00:04:29.713 SO libspdk_ut_mock.so.6.0 00:04:29.713 SO libspdk_log.so.7.1 00:04:29.713 SYMLINK libspdk_ut.so 00:04:29.713 SYMLINK libspdk_ut_mock.so 00:04:29.713 SYMLINK libspdk_log.so 00:04:29.713 CC lib/util/base64.o 00:04:29.713 CC lib/util/bit_array.o 00:04:29.713 CC lib/util/cpuset.o 00:04:29.713 CC lib/util/crc16.o 00:04:29.713 CC lib/util/crc32.o 00:04:29.713 CC lib/util/crc32c.o 00:04:29.713 CC lib/dma/dma.o 00:04:29.713 CC lib/ioat/ioat.o 00:04:29.713 CXX lib/trace_parser/trace.o 00:04:29.713 CC lib/vfio_user/host/vfio_user_pci.o 00:04:29.713 CC lib/util/crc32_ieee.o 00:04:29.713 CC lib/util/crc64.o 00:04:29.713 CC lib/util/dif.o 00:04:29.713 CC lib/util/fd.o 00:04:29.713 LIB libspdk_dma.a 00:04:29.713 CC lib/util/fd_group.o 00:04:29.713 SO libspdk_dma.so.5.0 00:04:29.713 CC lib/util/file.o 00:04:29.713 CC lib/util/hexlify.o 00:04:29.713 CC lib/util/iov.o 00:04:29.713 SYMLINK libspdk_dma.so 00:04:29.713 CC lib/vfio_user/host/vfio_user.o 00:04:29.713 LIB libspdk_ioat.a 00:04:29.713 CC lib/util/math.o 00:04:29.713 SO libspdk_ioat.so.7.0 00:04:29.713 SYMLINK libspdk_ioat.so 00:04:29.713 CC lib/util/net.o 00:04:29.713 CC lib/util/pipe.o 00:04:29.713 CC lib/util/strerror_tls.o 00:04:29.713 CC lib/util/string.o 00:04:29.713 CC lib/util/uuid.o 00:04:29.713 CC lib/util/xor.o 00:04:29.713 CC lib/util/zipf.o 00:04:29.713 LIB libspdk_vfio_user.a 00:04:29.973 CC lib/util/md5.o 00:04:29.973 SO libspdk_vfio_user.so.5.0 00:04:29.973 SYMLINK libspdk_vfio_user.so 00:04:30.233 LIB libspdk_util.a 00:04:30.233 SO libspdk_util.so.10.1 00:04:30.233 LIB libspdk_trace_parser.a 00:04:30.493 SYMLINK libspdk_util.so 00:04:30.493 SO libspdk_trace_parser.so.6.0 
00:04:30.493 SYMLINK libspdk_trace_parser.so 00:04:30.493 CC lib/conf/conf.o 00:04:30.493 CC lib/env_dpdk/memory.o 00:04:30.493 CC lib/env_dpdk/env.o 00:04:30.493 CC lib/json/json_parse.o 00:04:30.493 CC lib/vmd/vmd.o 00:04:30.493 CC lib/env_dpdk/pci.o 00:04:30.493 CC lib/json/json_util.o 00:04:30.493 CC lib/env_dpdk/init.o 00:04:30.493 CC lib/idxd/idxd.o 00:04:30.493 CC lib/rdma_utils/rdma_utils.o 00:04:30.753 CC lib/json/json_write.o 00:04:30.753 LIB libspdk_conf.a 00:04:30.753 SO libspdk_conf.so.6.0 00:04:30.753 LIB libspdk_rdma_utils.a 00:04:30.753 SYMLINK libspdk_conf.so 00:04:30.753 CC lib/idxd/idxd_user.o 00:04:30.753 CC lib/vmd/led.o 00:04:30.753 SO libspdk_rdma_utils.so.1.0 00:04:31.015 SYMLINK libspdk_rdma_utils.so 00:04:31.015 CC lib/env_dpdk/threads.o 00:04:31.015 CC lib/idxd/idxd_kernel.o 00:04:31.015 LIB libspdk_json.a 00:04:31.015 SO libspdk_json.so.6.0 00:04:31.015 CC lib/env_dpdk/pci_ioat.o 00:04:31.015 CC lib/env_dpdk/pci_virtio.o 00:04:31.015 SYMLINK libspdk_json.so 00:04:31.015 CC lib/env_dpdk/pci_vmd.o 00:04:31.015 CC lib/env_dpdk/pci_idxd.o 00:04:31.015 CC lib/env_dpdk/pci_event.o 00:04:31.015 CC lib/env_dpdk/sigbus_handler.o 00:04:31.015 CC lib/env_dpdk/pci_dpdk.o 00:04:31.277 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:31.277 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:31.277 LIB libspdk_vmd.a 00:04:31.277 LIB libspdk_idxd.a 00:04:31.277 CC lib/rdma_provider/common.o 00:04:31.277 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:31.277 SO libspdk_vmd.so.6.0 00:04:31.277 SO libspdk_idxd.so.12.1 00:04:31.277 SYMLINK libspdk_vmd.so 00:04:31.277 SYMLINK libspdk_idxd.so 00:04:31.277 CC lib/jsonrpc/jsonrpc_server.o 00:04:31.277 CC lib/jsonrpc/jsonrpc_client.o 00:04:31.277 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:31.277 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:31.538 LIB libspdk_rdma_provider.a 00:04:31.538 SO libspdk_rdma_provider.so.7.0 00:04:31.538 SYMLINK libspdk_rdma_provider.so 00:04:31.538 LIB libspdk_jsonrpc.a 00:04:31.538 SO libspdk_jsonrpc.so.6.0 00:04:31.800 SYMLINK libspdk_jsonrpc.so 00:04:32.061 CC lib/rpc/rpc.o 00:04:32.061 LIB libspdk_env_dpdk.a 00:04:32.061 SO libspdk_env_dpdk.so.15.1 00:04:32.061 LIB libspdk_rpc.a 00:04:32.323 SO libspdk_rpc.so.6.0 00:04:32.323 SYMLINK libspdk_env_dpdk.so 00:04:32.323 SYMLINK libspdk_rpc.so 00:04:32.323 CC lib/notify/notify.o 00:04:32.323 CC lib/notify/notify_rpc.o 00:04:32.323 CC lib/keyring/keyring.o 00:04:32.323 CC lib/keyring/keyring_rpc.o 00:04:32.585 CC lib/trace/trace.o 00:04:32.585 CC lib/trace/trace_rpc.o 00:04:32.585 CC lib/trace/trace_flags.o 00:04:32.585 LIB libspdk_notify.a 00:04:32.585 SO libspdk_notify.so.6.0 00:04:32.585 SYMLINK libspdk_notify.so 00:04:32.585 LIB libspdk_keyring.a 00:04:32.585 SO libspdk_keyring.so.2.0 00:04:32.585 LIB libspdk_trace.a 00:04:32.845 SO libspdk_trace.so.11.0 00:04:32.845 SYMLINK libspdk_keyring.so 00:04:32.845 SYMLINK libspdk_trace.so 00:04:33.105 CC lib/thread/thread.o 00:04:33.105 CC lib/thread/iobuf.o 00:04:33.105 CC lib/sock/sock.o 00:04:33.105 CC lib/sock/sock_rpc.o 00:04:33.365 LIB libspdk_sock.a 00:04:33.365 SO libspdk_sock.so.10.0 00:04:33.626 SYMLINK libspdk_sock.so 00:04:33.888 CC lib/nvme/nvme_ctrlr.o 00:04:33.888 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:33.888 CC lib/nvme/nvme_fabric.o 00:04:33.888 CC lib/nvme/nvme_ns_cmd.o 00:04:33.888 CC lib/nvme/nvme_ns.o 00:04:33.888 CC lib/nvme/nvme_pcie_common.o 00:04:33.888 CC lib/nvme/nvme_pcie.o 00:04:33.888 CC lib/nvme/nvme_qpair.o 00:04:33.888 CC lib/nvme/nvme.o 00:04:34.458 CC lib/nvme/nvme_quirks.o 00:04:34.458 CC 
lib/nvme/nvme_transport.o 00:04:34.458 CC lib/nvme/nvme_discovery.o 00:04:34.458 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:34.458 LIB libspdk_thread.a 00:04:34.458 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:34.458 SO libspdk_thread.so.11.0 00:04:34.779 CC lib/nvme/nvme_tcp.o 00:04:34.779 CC lib/nvme/nvme_opal.o 00:04:34.779 SYMLINK libspdk_thread.so 00:04:34.779 CC lib/nvme/nvme_io_msg.o 00:04:34.779 CC lib/nvme/nvme_poll_group.o 00:04:34.779 CC lib/nvme/nvme_zns.o 00:04:35.071 CC lib/nvme/nvme_stubs.o 00:04:35.071 CC lib/nvme/nvme_auth.o 00:04:35.071 CC lib/accel/accel.o 00:04:35.071 CC lib/accel/accel_rpc.o 00:04:35.071 CC lib/nvme/nvme_cuse.o 00:04:35.331 CC lib/nvme/nvme_rdma.o 00:04:35.331 CC lib/accel/accel_sw.o 00:04:35.591 CC lib/blob/blobstore.o 00:04:35.591 CC lib/init/json_config.o 00:04:35.591 CC lib/virtio/virtio.o 00:04:35.591 CC lib/virtio/virtio_vhost_user.o 00:04:35.851 CC lib/init/subsystem.o 00:04:35.851 CC lib/init/subsystem_rpc.o 00:04:36.110 CC lib/init/rpc.o 00:04:36.110 CC lib/virtio/virtio_vfio_user.o 00:04:36.110 CC lib/virtio/virtio_pci.o 00:04:36.110 CC lib/blob/request.o 00:04:36.110 CC lib/fsdev/fsdev.o 00:04:36.110 CC lib/blob/zeroes.o 00:04:36.110 CC lib/blob/blob_bs_dev.o 00:04:36.110 LIB libspdk_init.a 00:04:36.110 SO libspdk_init.so.6.0 00:04:36.369 SYMLINK libspdk_init.so 00:04:36.370 CC lib/fsdev/fsdev_io.o 00:04:36.370 CC lib/fsdev/fsdev_rpc.o 00:04:36.370 LIB libspdk_accel.a 00:04:36.370 SO libspdk_accel.so.16.0 00:04:36.370 LIB libspdk_virtio.a 00:04:36.370 SO libspdk_virtio.so.7.0 00:04:36.370 SYMLINK libspdk_accel.so 00:04:36.370 CC lib/event/app.o 00:04:36.370 CC lib/event/reactor.o 00:04:36.370 CC lib/event/log_rpc.o 00:04:36.370 CC lib/event/app_rpc.o 00:04:36.370 SYMLINK libspdk_virtio.so 00:04:36.370 CC lib/event/scheduler_static.o 00:04:36.629 CC lib/bdev/bdev.o 00:04:36.629 CC lib/bdev/bdev_rpc.o 00:04:36.629 CC lib/bdev/bdev_zone.o 00:04:36.629 CC lib/bdev/part.o 00:04:36.629 CC lib/bdev/scsi_nvme.o 00:04:36.629 LIB libspdk_nvme.a 00:04:36.629 LIB libspdk_fsdev.a 00:04:36.629 SO libspdk_fsdev.so.2.0 00:04:36.889 SYMLINK libspdk_fsdev.so 00:04:36.889 SO libspdk_nvme.so.15.0 00:04:36.889 LIB libspdk_event.a 00:04:36.889 SO libspdk_event.so.14.0 00:04:36.889 SYMLINK libspdk_event.so 00:04:37.149 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:37.149 SYMLINK libspdk_nvme.so 00:04:37.719 LIB libspdk_fuse_dispatcher.a 00:04:37.719 SO libspdk_fuse_dispatcher.so.1.0 00:04:37.719 SYMLINK libspdk_fuse_dispatcher.so 00:04:39.102 LIB libspdk_blob.a 00:04:39.102 SO libspdk_blob.so.12.0 00:04:39.364 SYMLINK libspdk_blob.so 00:04:39.364 LIB libspdk_bdev.a 00:04:39.624 SO libspdk_bdev.so.17.0 00:04:39.624 CC lib/blobfs/blobfs.o 00:04:39.624 CC lib/blobfs/tree.o 00:04:39.624 CC lib/lvol/lvol.o 00:04:39.624 SYMLINK libspdk_bdev.so 00:04:39.886 CC lib/scsi/dev.o 00:04:39.886 CC lib/scsi/lun.o 00:04:39.886 CC lib/scsi/port.o 00:04:39.886 CC lib/nbd/nbd.o 00:04:39.886 CC lib/scsi/scsi.o 00:04:39.886 CC lib/nvmf/ctrlr.o 00:04:39.886 CC lib/ftl/ftl_core.o 00:04:39.886 CC lib/ublk/ublk.o 00:04:39.886 CC lib/ftl/ftl_init.o 00:04:39.886 CC lib/ftl/ftl_layout.o 00:04:39.886 CC lib/ftl/ftl_debug.o 00:04:40.148 CC lib/scsi/scsi_bdev.o 00:04:40.148 CC lib/ublk/ublk_rpc.o 00:04:40.148 CC lib/nvmf/ctrlr_discovery.o 00:04:40.148 CC lib/nbd/nbd_rpc.o 00:04:40.148 CC lib/ftl/ftl_io.o 00:04:40.148 CC lib/ftl/ftl_sb.o 00:04:40.148 CC lib/nvmf/ctrlr_bdev.o 00:04:40.409 LIB libspdk_nbd.a 00:04:40.409 LIB libspdk_blobfs.a 00:04:40.409 SO libspdk_nbd.so.7.0 00:04:40.409 SO 
libspdk_blobfs.so.11.0 00:04:40.409 SYMLINK libspdk_nbd.so 00:04:40.409 CC lib/nvmf/subsystem.o 00:04:40.409 LIB libspdk_ublk.a 00:04:40.409 CC lib/scsi/scsi_pr.o 00:04:40.409 SYMLINK libspdk_blobfs.so 00:04:40.409 CC lib/nvmf/nvmf.o 00:04:40.409 CC lib/ftl/ftl_l2p.o 00:04:40.409 SO libspdk_ublk.so.3.0 00:04:40.409 LIB libspdk_lvol.a 00:04:40.409 SO libspdk_lvol.so.11.0 00:04:40.409 SYMLINK libspdk_ublk.so 00:04:40.409 CC lib/nvmf/nvmf_rpc.o 00:04:40.409 CC lib/nvmf/transport.o 00:04:40.669 SYMLINK libspdk_lvol.so 00:04:40.669 CC lib/nvmf/tcp.o 00:04:40.669 CC lib/nvmf/stubs.o 00:04:40.669 CC lib/ftl/ftl_l2p_flat.o 00:04:40.669 CC lib/scsi/scsi_rpc.o 00:04:40.930 CC lib/ftl/ftl_nv_cache.o 00:04:40.930 CC lib/scsi/task.o 00:04:40.930 CC lib/nvmf/mdns_server.o 00:04:40.930 CC lib/nvmf/rdma.o 00:04:41.190 LIB libspdk_scsi.a 00:04:41.190 SO libspdk_scsi.so.9.0 00:04:41.190 SYMLINK libspdk_scsi.so 00:04:41.190 CC lib/nvmf/auth.o 00:04:41.190 CC lib/ftl/ftl_band.o 00:04:41.450 CC lib/ftl/ftl_band_ops.o 00:04:41.450 CC lib/iscsi/conn.o 00:04:41.450 CC lib/vhost/vhost.o 00:04:41.711 CC lib/vhost/vhost_rpc.o 00:04:41.711 CC lib/iscsi/init_grp.o 00:04:41.711 CC lib/vhost/vhost_scsi.o 00:04:41.711 CC lib/ftl/ftl_writer.o 00:04:41.711 CC lib/iscsi/iscsi.o 00:04:41.971 CC lib/vhost/vhost_blk.o 00:04:41.971 CC lib/ftl/ftl_rq.o 00:04:41.971 CC lib/ftl/ftl_reloc.o 00:04:42.230 CC lib/iscsi/param.o 00:04:42.230 CC lib/iscsi/portal_grp.o 00:04:42.231 CC lib/iscsi/tgt_node.o 00:04:42.231 CC lib/iscsi/iscsi_subsystem.o 00:04:42.508 CC lib/vhost/rte_vhost_user.o 00:04:42.508 CC lib/ftl/ftl_l2p_cache.o 00:04:42.508 CC lib/ftl/ftl_p2l.o 00:04:42.508 CC lib/iscsi/iscsi_rpc.o 00:04:42.508 CC lib/iscsi/task.o 00:04:42.769 CC lib/ftl/ftl_p2l_log.o 00:04:42.769 CC lib/ftl/mngt/ftl_mngt.o 00:04:42.769 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:42.769 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:42.769 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:42.769 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:43.030 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:43.030 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:43.030 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:43.030 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:43.030 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:43.030 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:43.030 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:43.030 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:43.291 CC lib/ftl/utils/ftl_conf.o 00:04:43.291 CC lib/ftl/utils/ftl_md.o 00:04:43.291 CC lib/ftl/utils/ftl_mempool.o 00:04:43.291 LIB libspdk_iscsi.a 00:04:43.291 LIB libspdk_nvmf.a 00:04:43.291 CC lib/ftl/utils/ftl_bitmap.o 00:04:43.291 CC lib/ftl/utils/ftl_property.o 00:04:43.291 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:43.291 SO libspdk_iscsi.so.8.0 00:04:43.291 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:43.291 SO libspdk_nvmf.so.20.0 00:04:43.291 LIB libspdk_vhost.a 00:04:43.552 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:43.552 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:43.552 SYMLINK libspdk_iscsi.so 00:04:43.552 SO libspdk_vhost.so.8.0 00:04:43.552 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:43.552 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:43.552 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:43.552 SYMLINK libspdk_vhost.so 00:04:43.552 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:43.552 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:43.552 SYMLINK libspdk_nvmf.so 00:04:43.552 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:43.552 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:43.552 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:43.552 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:43.552 CC 
lib/ftl/base/ftl_base_dev.o 00:04:43.814 CC lib/ftl/base/ftl_base_bdev.o 00:04:43.814 CC lib/ftl/ftl_trace.o 00:04:44.075 LIB libspdk_ftl.a 00:04:44.075 SO libspdk_ftl.so.9.0 00:04:44.336 SYMLINK libspdk_ftl.so 00:04:44.622 CC module/env_dpdk/env_dpdk_rpc.o 00:04:44.882 CC module/keyring/linux/keyring.o 00:04:44.882 CC module/blob/bdev/blob_bdev.o 00:04:44.882 CC module/fsdev/aio/fsdev_aio.o 00:04:44.882 CC module/accel/ioat/accel_ioat.o 00:04:44.882 CC module/accel/error/accel_error.o 00:04:44.882 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:44.882 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:44.882 CC module/sock/posix/posix.o 00:04:44.882 CC module/keyring/file/keyring.o 00:04:44.882 LIB libspdk_env_dpdk_rpc.a 00:04:44.882 SO libspdk_env_dpdk_rpc.so.6.0 00:04:44.882 SYMLINK libspdk_env_dpdk_rpc.so 00:04:44.882 CC module/keyring/linux/keyring_rpc.o 00:04:44.882 CC module/accel/error/accel_error_rpc.o 00:04:44.882 CC module/keyring/file/keyring_rpc.o 00:04:44.882 LIB libspdk_scheduler_dpdk_governor.a 00:04:44.882 CC module/accel/ioat/accel_ioat_rpc.o 00:04:44.882 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:44.882 LIB libspdk_scheduler_dynamic.a 00:04:44.882 SO libspdk_scheduler_dynamic.so.4.0 00:04:44.882 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:45.143 LIB libspdk_keyring_linux.a 00:04:45.143 LIB libspdk_accel_error.a 00:04:45.143 SYMLINK libspdk_scheduler_dynamic.so 00:04:45.143 LIB libspdk_blob_bdev.a 00:04:45.143 LIB libspdk_accel_ioat.a 00:04:45.143 LIB libspdk_keyring_file.a 00:04:45.143 SO libspdk_accel_error.so.2.0 00:04:45.143 SO libspdk_keyring_linux.so.1.0 00:04:45.143 SO libspdk_blob_bdev.so.12.0 00:04:45.143 SO libspdk_accel_ioat.so.6.0 00:04:45.143 SO libspdk_keyring_file.so.2.0 00:04:45.143 SYMLINK libspdk_accel_error.so 00:04:45.143 SYMLINK libspdk_blob_bdev.so 00:04:45.143 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:45.143 SYMLINK libspdk_keyring_file.so 00:04:45.143 SYMLINK libspdk_accel_ioat.so 00:04:45.143 CC module/fsdev/aio/linux_aio_mgr.o 00:04:45.143 SYMLINK libspdk_keyring_linux.so 00:04:45.143 CC module/scheduler/gscheduler/gscheduler.o 00:04:45.143 CC module/accel/iaa/accel_iaa.o 00:04:45.143 CC module/accel/iaa/accel_iaa_rpc.o 00:04:45.143 CC module/accel/dsa/accel_dsa.o 00:04:45.405 LIB libspdk_scheduler_gscheduler.a 00:04:45.405 CC module/accel/dsa/accel_dsa_rpc.o 00:04:45.405 SO libspdk_scheduler_gscheduler.so.4.0 00:04:45.405 CC module/bdev/delay/vbdev_delay.o 00:04:45.405 SYMLINK libspdk_scheduler_gscheduler.so 00:04:45.405 LIB libspdk_accel_iaa.a 00:04:45.405 CC module/blobfs/bdev/blobfs_bdev.o 00:04:45.405 CC module/bdev/error/vbdev_error.o 00:04:45.405 SO libspdk_accel_iaa.so.3.0 00:04:45.405 CC module/bdev/error/vbdev_error_rpc.o 00:04:45.405 CC module/bdev/gpt/gpt.o 00:04:45.405 LIB libspdk_fsdev_aio.a 00:04:45.405 SO libspdk_fsdev_aio.so.1.0 00:04:45.405 LIB libspdk_accel_dsa.a 00:04:45.405 SYMLINK libspdk_accel_iaa.so 00:04:45.405 CC module/bdev/gpt/vbdev_gpt.o 00:04:45.667 SO libspdk_accel_dsa.so.5.0 00:04:45.667 LIB libspdk_sock_posix.a 00:04:45.667 CC module/bdev/lvol/vbdev_lvol.o 00:04:45.667 SO libspdk_sock_posix.so.6.0 00:04:45.667 SYMLINK libspdk_fsdev_aio.so 00:04:45.667 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:45.667 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:45.667 SYMLINK libspdk_accel_dsa.so 00:04:45.667 SYMLINK libspdk_sock_posix.so 00:04:45.667 LIB libspdk_bdev_error.a 00:04:45.667 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:45.667 SO libspdk_bdev_error.so.6.0 00:04:45.667 LIB libspdk_blobfs_bdev.a 
00:04:45.667 CC module/bdev/malloc/bdev_malloc.o 00:04:45.667 SO libspdk_blobfs_bdev.so.6.0 00:04:45.667 CC module/bdev/null/bdev_null.o 00:04:45.667 CC module/bdev/nvme/bdev_nvme.o 00:04:45.667 SYMLINK libspdk_bdev_error.so 00:04:45.667 LIB libspdk_bdev_gpt.a 00:04:45.667 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:45.928 CC module/bdev/passthru/vbdev_passthru.o 00:04:45.928 SO libspdk_bdev_gpt.so.6.0 00:04:45.928 SYMLINK libspdk_blobfs_bdev.so 00:04:45.928 CC module/bdev/nvme/nvme_rpc.o 00:04:45.928 LIB libspdk_bdev_delay.a 00:04:45.928 SYMLINK libspdk_bdev_gpt.so 00:04:45.928 SO libspdk_bdev_delay.so.6.0 00:04:45.928 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:45.929 SYMLINK libspdk_bdev_delay.so 00:04:45.929 CC module/bdev/nvme/bdev_mdns_client.o 00:04:45.929 CC module/bdev/null/bdev_null_rpc.o 00:04:45.929 CC module/bdev/raid/bdev_raid.o 00:04:46.200 LIB libspdk_bdev_lvol.a 00:04:46.200 CC module/bdev/raid/bdev_raid_rpc.o 00:04:46.200 CC module/bdev/nvme/vbdev_opal.o 00:04:46.200 SO libspdk_bdev_lvol.so.6.0 00:04:46.200 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:46.200 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:46.200 LIB libspdk_bdev_malloc.a 00:04:46.200 LIB libspdk_bdev_null.a 00:04:46.200 SO libspdk_bdev_malloc.so.6.0 00:04:46.200 SYMLINK libspdk_bdev_lvol.so 00:04:46.200 SO libspdk_bdev_null.so.6.0 00:04:46.200 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:46.200 SYMLINK libspdk_bdev_malloc.so 00:04:46.200 SYMLINK libspdk_bdev_null.so 00:04:46.200 LIB libspdk_bdev_passthru.a 00:04:46.201 CC module/bdev/raid/bdev_raid_sb.o 00:04:46.201 SO libspdk_bdev_passthru.so.6.0 00:04:46.462 CC module/bdev/raid/raid0.o 00:04:46.462 SYMLINK libspdk_bdev_passthru.so 00:04:46.462 CC module/bdev/split/vbdev_split.o 00:04:46.462 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:46.462 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:46.462 CC module/bdev/xnvme/bdev_xnvme.o 00:04:46.462 CC module/bdev/aio/bdev_aio.o 00:04:46.462 CC module/bdev/ftl/bdev_ftl.o 00:04:46.462 CC module/bdev/aio/bdev_aio_rpc.o 00:04:46.462 CC module/bdev/raid/raid1.o 00:04:46.724 CC module/bdev/split/vbdev_split_rpc.o 00:04:46.724 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:46.724 LIB libspdk_bdev_zone_block.a 00:04:46.724 SO libspdk_bdev_zone_block.so.6.0 00:04:46.724 LIB libspdk_bdev_split.a 00:04:46.724 LIB libspdk_bdev_xnvme.a 00:04:46.724 LIB libspdk_bdev_aio.a 00:04:46.724 SO libspdk_bdev_split.so.6.0 00:04:46.724 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:46.724 SO libspdk_bdev_xnvme.so.3.0 00:04:46.724 SO libspdk_bdev_aio.so.6.0 00:04:46.984 CC module/bdev/raid/concat.o 00:04:46.984 SYMLINK libspdk_bdev_zone_block.so 00:04:46.984 CC module/bdev/iscsi/bdev_iscsi.o 00:04:46.984 SYMLINK libspdk_bdev_split.so 00:04:46.984 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:46.984 SYMLINK libspdk_bdev_xnvme.so 00:04:46.984 SYMLINK libspdk_bdev_aio.so 00:04:46.984 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:46.984 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:46.984 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:46.984 LIB libspdk_bdev_ftl.a 00:04:46.984 SO libspdk_bdev_ftl.so.6.0 00:04:46.984 LIB libspdk_bdev_raid.a 00:04:46.984 SYMLINK libspdk_bdev_ftl.so 00:04:47.245 SO libspdk_bdev_raid.so.6.0 00:04:47.245 SYMLINK libspdk_bdev_raid.so 00:04:47.245 LIB libspdk_bdev_iscsi.a 00:04:47.245 SO libspdk_bdev_iscsi.so.6.0 00:04:47.245 SYMLINK libspdk_bdev_iscsi.so 00:04:47.507 LIB libspdk_bdev_virtio.a 00:04:47.507 SO libspdk_bdev_virtio.so.6.0 00:04:47.507 SYMLINK libspdk_bdev_virtio.so 00:04:48.446 LIB 
libspdk_bdev_nvme.a 00:04:48.446 SO libspdk_bdev_nvme.so.7.1 00:04:48.707 SYMLINK libspdk_bdev_nvme.so 00:04:48.965 CC module/event/subsystems/sock/sock.o 00:04:48.965 CC module/event/subsystems/keyring/keyring.o 00:04:48.965 CC module/event/subsystems/vmd/vmd.o 00:04:48.965 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:48.965 CC module/event/subsystems/iobuf/iobuf.o 00:04:48.965 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:48.965 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:48.965 CC module/event/subsystems/scheduler/scheduler.o 00:04:49.225 CC module/event/subsystems/fsdev/fsdev.o 00:04:49.225 LIB libspdk_event_keyring.a 00:04:49.225 LIB libspdk_event_vmd.a 00:04:49.225 LIB libspdk_event_sock.a 00:04:49.225 LIB libspdk_event_vhost_blk.a 00:04:49.225 LIB libspdk_event_scheduler.a 00:04:49.225 LIB libspdk_event_iobuf.a 00:04:49.225 SO libspdk_event_keyring.so.1.0 00:04:49.225 SO libspdk_event_sock.so.5.0 00:04:49.225 LIB libspdk_event_fsdev.a 00:04:49.225 SO libspdk_event_vmd.so.6.0 00:04:49.225 SO libspdk_event_scheduler.so.4.0 00:04:49.226 SO libspdk_event_iobuf.so.3.0 00:04:49.226 SO libspdk_event_vhost_blk.so.3.0 00:04:49.226 SO libspdk_event_fsdev.so.1.0 00:04:49.226 SYMLINK libspdk_event_sock.so 00:04:49.226 SYMLINK libspdk_event_keyring.so 00:04:49.226 SYMLINK libspdk_event_scheduler.so 00:04:49.226 SYMLINK libspdk_event_vmd.so 00:04:49.226 SYMLINK libspdk_event_vhost_blk.so 00:04:49.226 SYMLINK libspdk_event_iobuf.so 00:04:49.226 SYMLINK libspdk_event_fsdev.so 00:04:49.486 CC module/event/subsystems/accel/accel.o 00:04:49.745 LIB libspdk_event_accel.a 00:04:49.745 SO libspdk_event_accel.so.6.0 00:04:49.745 SYMLINK libspdk_event_accel.so 00:04:50.006 CC module/event/subsystems/bdev/bdev.o 00:04:50.006 LIB libspdk_event_bdev.a 00:04:50.006 SO libspdk_event_bdev.so.6.0 00:04:50.275 SYMLINK libspdk_event_bdev.so 00:04:50.275 CC module/event/subsystems/scsi/scsi.o 00:04:50.275 CC module/event/subsystems/ublk/ublk.o 00:04:50.275 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:50.275 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:50.275 CC module/event/subsystems/nbd/nbd.o 00:04:50.534 LIB libspdk_event_ublk.a 00:04:50.534 LIB libspdk_event_scsi.a 00:04:50.534 SO libspdk_event_ublk.so.3.0 00:04:50.534 LIB libspdk_event_nbd.a 00:04:50.534 SO libspdk_event_scsi.so.6.0 00:04:50.534 SO libspdk_event_nbd.so.6.0 00:04:50.534 SYMLINK libspdk_event_ublk.so 00:04:50.534 LIB libspdk_event_nvmf.a 00:04:50.534 SYMLINK libspdk_event_nbd.so 00:04:50.534 SYMLINK libspdk_event_scsi.so 00:04:50.534 SO libspdk_event_nvmf.so.6.0 00:04:50.534 SYMLINK libspdk_event_nvmf.so 00:04:50.793 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:50.793 CC module/event/subsystems/iscsi/iscsi.o 00:04:50.793 LIB libspdk_event_vhost_scsi.a 00:04:51.053 SO libspdk_event_vhost_scsi.so.3.0 00:04:51.054 LIB libspdk_event_iscsi.a 00:04:51.054 SO libspdk_event_iscsi.so.6.0 00:04:51.054 SYMLINK libspdk_event_vhost_scsi.so 00:04:51.054 SYMLINK libspdk_event_iscsi.so 00:04:51.054 SO libspdk.so.6.0 00:04:51.054 SYMLINK libspdk.so 00:04:51.314 CC app/trace_record/trace_record.o 00:04:51.314 CC test/rpc_client/rpc_client_test.o 00:04:51.314 TEST_HEADER include/spdk/accel.h 00:04:51.314 TEST_HEADER include/spdk/accel_module.h 00:04:51.314 TEST_HEADER include/spdk/assert.h 00:04:51.314 TEST_HEADER include/spdk/barrier.h 00:04:51.314 CXX app/trace/trace.o 00:04:51.314 TEST_HEADER include/spdk/base64.h 00:04:51.314 TEST_HEADER include/spdk/bdev.h 00:04:51.314 TEST_HEADER include/spdk/bdev_module.h 00:04:51.314 
TEST_HEADER include/spdk/bdev_zone.h 00:04:51.314 TEST_HEADER include/spdk/bit_array.h 00:04:51.314 TEST_HEADER include/spdk/bit_pool.h 00:04:51.314 TEST_HEADER include/spdk/blob_bdev.h 00:04:51.314 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:51.314 TEST_HEADER include/spdk/blobfs.h 00:04:51.314 TEST_HEADER include/spdk/blob.h 00:04:51.314 TEST_HEADER include/spdk/conf.h 00:04:51.314 TEST_HEADER include/spdk/config.h 00:04:51.314 TEST_HEADER include/spdk/cpuset.h 00:04:51.314 TEST_HEADER include/spdk/crc16.h 00:04:51.314 TEST_HEADER include/spdk/crc32.h 00:04:51.314 TEST_HEADER include/spdk/crc64.h 00:04:51.314 TEST_HEADER include/spdk/dif.h 00:04:51.314 TEST_HEADER include/spdk/dma.h 00:04:51.314 TEST_HEADER include/spdk/endian.h 00:04:51.314 TEST_HEADER include/spdk/env_dpdk.h 00:04:51.314 TEST_HEADER include/spdk/env.h 00:04:51.314 TEST_HEADER include/spdk/event.h 00:04:51.314 TEST_HEADER include/spdk/fd_group.h 00:04:51.314 TEST_HEADER include/spdk/fd.h 00:04:51.314 CC examples/util/zipf/zipf.o 00:04:51.314 TEST_HEADER include/spdk/file.h 00:04:51.314 TEST_HEADER include/spdk/fsdev.h 00:04:51.314 TEST_HEADER include/spdk/fsdev_module.h 00:04:51.314 TEST_HEADER include/spdk/ftl.h 00:04:51.314 CC examples/ioat/perf/perf.o 00:04:51.314 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:51.314 TEST_HEADER include/spdk/gpt_spec.h 00:04:51.314 TEST_HEADER include/spdk/hexlify.h 00:04:51.314 TEST_HEADER include/spdk/histogram_data.h 00:04:51.314 TEST_HEADER include/spdk/idxd.h 00:04:51.314 TEST_HEADER include/spdk/idxd_spec.h 00:04:51.314 CC test/thread/poller_perf/poller_perf.o 00:04:51.314 TEST_HEADER include/spdk/init.h 00:04:51.574 TEST_HEADER include/spdk/ioat.h 00:04:51.574 TEST_HEADER include/spdk/ioat_spec.h 00:04:51.574 TEST_HEADER include/spdk/iscsi_spec.h 00:04:51.574 TEST_HEADER include/spdk/json.h 00:04:51.574 TEST_HEADER include/spdk/jsonrpc.h 00:04:51.574 TEST_HEADER include/spdk/keyring.h 00:04:51.574 TEST_HEADER include/spdk/keyring_module.h 00:04:51.574 TEST_HEADER include/spdk/likely.h 00:04:51.574 TEST_HEADER include/spdk/log.h 00:04:51.574 TEST_HEADER include/spdk/lvol.h 00:04:51.574 TEST_HEADER include/spdk/md5.h 00:04:51.574 TEST_HEADER include/spdk/memory.h 00:04:51.574 TEST_HEADER include/spdk/mmio.h 00:04:51.574 TEST_HEADER include/spdk/nbd.h 00:04:51.574 TEST_HEADER include/spdk/net.h 00:04:51.574 TEST_HEADER include/spdk/notify.h 00:04:51.574 TEST_HEADER include/spdk/nvme.h 00:04:51.574 TEST_HEADER include/spdk/nvme_intel.h 00:04:51.574 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:51.574 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:51.574 TEST_HEADER include/spdk/nvme_spec.h 00:04:51.574 TEST_HEADER include/spdk/nvme_zns.h 00:04:51.574 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:51.574 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:51.574 TEST_HEADER include/spdk/nvmf.h 00:04:51.574 CC test/app/bdev_svc/bdev_svc.o 00:04:51.574 TEST_HEADER include/spdk/nvmf_spec.h 00:04:51.574 TEST_HEADER include/spdk/nvmf_transport.h 00:04:51.574 TEST_HEADER include/spdk/opal.h 00:04:51.574 TEST_HEADER include/spdk/opal_spec.h 00:04:51.574 CC test/dma/test_dma/test_dma.o 00:04:51.574 TEST_HEADER include/spdk/pci_ids.h 00:04:51.574 TEST_HEADER include/spdk/pipe.h 00:04:51.574 TEST_HEADER include/spdk/queue.h 00:04:51.574 TEST_HEADER include/spdk/reduce.h 00:04:51.574 CC test/env/mem_callbacks/mem_callbacks.o 00:04:51.574 TEST_HEADER include/spdk/rpc.h 00:04:51.574 TEST_HEADER include/spdk/scheduler.h 00:04:51.574 TEST_HEADER include/spdk/scsi.h 00:04:51.574 TEST_HEADER 
include/spdk/scsi_spec.h 00:04:51.574 TEST_HEADER include/spdk/sock.h 00:04:51.574 TEST_HEADER include/spdk/stdinc.h 00:04:51.574 TEST_HEADER include/spdk/string.h 00:04:51.574 TEST_HEADER include/spdk/thread.h 00:04:51.574 TEST_HEADER include/spdk/trace.h 00:04:51.574 TEST_HEADER include/spdk/trace_parser.h 00:04:51.574 TEST_HEADER include/spdk/tree.h 00:04:51.574 TEST_HEADER include/spdk/ublk.h 00:04:51.574 TEST_HEADER include/spdk/util.h 00:04:51.574 TEST_HEADER include/spdk/uuid.h 00:04:51.574 TEST_HEADER include/spdk/version.h 00:04:51.574 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:51.574 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:51.574 LINK rpc_client_test 00:04:51.574 TEST_HEADER include/spdk/vhost.h 00:04:51.574 TEST_HEADER include/spdk/vmd.h 00:04:51.574 TEST_HEADER include/spdk/xor.h 00:04:51.574 TEST_HEADER include/spdk/zipf.h 00:04:51.574 CXX test/cpp_headers/accel.o 00:04:51.574 LINK zipf 00:04:51.574 LINK poller_perf 00:04:51.574 LINK spdk_trace_record 00:04:51.574 LINK ioat_perf 00:04:51.574 LINK bdev_svc 00:04:51.574 CXX test/cpp_headers/accel_module.o 00:04:51.833 CC examples/ioat/verify/verify.o 00:04:51.833 LINK spdk_trace 00:04:51.833 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:51.833 CXX test/cpp_headers/assert.o 00:04:51.833 CC examples/thread/thread/thread_ex.o 00:04:51.833 CC examples/sock/hello_world/hello_sock.o 00:04:51.833 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:51.833 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:51.833 LINK verify 00:04:52.163 LINK interrupt_tgt 00:04:52.163 LINK test_dma 00:04:52.163 CC app/nvmf_tgt/nvmf_main.o 00:04:52.163 CXX test/cpp_headers/barrier.o 00:04:52.163 LINK mem_callbacks 00:04:52.163 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:52.163 CXX test/cpp_headers/base64.o 00:04:52.163 LINK thread 00:04:52.163 LINK hello_sock 00:04:52.163 CC test/env/vtophys/vtophys.o 00:04:52.163 LINK nvmf_tgt 00:04:52.163 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:52.163 CXX test/cpp_headers/bdev.o 00:04:52.453 LINK nvme_fuzz 00:04:52.453 CC examples/vmd/lsvmd/lsvmd.o 00:04:52.453 CC app/iscsi_tgt/iscsi_tgt.o 00:04:52.453 LINK vtophys 00:04:52.453 CC examples/vmd/led/led.o 00:04:52.453 CXX test/cpp_headers/bdev_module.o 00:04:52.453 CC examples/idxd/perf/perf.o 00:04:52.453 LINK lsvmd 00:04:52.453 LINK iscsi_tgt 00:04:52.453 CC test/app/histogram_perf/histogram_perf.o 00:04:52.453 LINK led 00:04:52.453 CC app/spdk_tgt/spdk_tgt.o 00:04:52.453 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:52.713 CXX test/cpp_headers/bdev_zone.o 00:04:52.713 CC test/env/memory/memory_ut.o 00:04:52.713 LINK vhost_fuzz 00:04:52.713 LINK histogram_perf 00:04:52.713 LINK env_dpdk_post_init 00:04:52.713 LINK spdk_tgt 00:04:52.713 CC app/spdk_lspci/spdk_lspci.o 00:04:52.713 CC app/spdk_nvme_perf/perf.o 00:04:52.713 CXX test/cpp_headers/bit_array.o 00:04:52.713 LINK idxd_perf 00:04:52.713 CC test/env/pci/pci_ut.o 00:04:52.713 CC test/app/jsoncat/jsoncat.o 00:04:52.973 LINK spdk_lspci 00:04:52.973 CXX test/cpp_headers/bit_pool.o 00:04:52.973 CC app/spdk_nvme_identify/identify.o 00:04:52.973 CC test/event/event_perf/event_perf.o 00:04:52.973 LINK jsoncat 00:04:52.973 CC examples/nvme/hello_world/hello_world.o 00:04:52.973 CXX test/cpp_headers/blob_bdev.o 00:04:52.973 CXX test/cpp_headers/blobfs_bdev.o 00:04:52.973 LINK event_perf 00:04:53.235 CC test/event/reactor/reactor.o 00:04:53.235 LINK pci_ut 00:04:53.235 LINK reactor 00:04:53.235 CXX test/cpp_headers/blobfs.o 00:04:53.235 LINK hello_world 00:04:53.235 CC 
examples/nvme/reconnect/reconnect.o 00:04:53.235 CC test/nvme/aer/aer.o 00:04:53.493 CXX test/cpp_headers/blob.o 00:04:53.493 CC test/event/reactor_perf/reactor_perf.o 00:04:53.493 CC test/nvme/reset/reset.o 00:04:53.493 CC test/nvme/sgl/sgl.o 00:04:53.493 LINK memory_ut 00:04:53.493 CXX test/cpp_headers/conf.o 00:04:53.493 LINK spdk_nvme_perf 00:04:53.493 LINK reactor_perf 00:04:53.493 LINK aer 00:04:53.753 LINK reconnect 00:04:53.753 LINK iscsi_fuzz 00:04:53.753 CXX test/cpp_headers/config.o 00:04:53.753 LINK sgl 00:04:53.753 CXX test/cpp_headers/cpuset.o 00:04:53.753 CXX test/cpp_headers/crc16.o 00:04:53.753 LINK reset 00:04:53.753 CC test/nvme/e2edp/nvme_dp.o 00:04:53.753 CC test/event/app_repeat/app_repeat.o 00:04:53.753 LINK spdk_nvme_identify 00:04:53.753 CXX test/cpp_headers/crc32.o 00:04:54.013 CXX test/cpp_headers/crc64.o 00:04:54.013 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:54.013 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:54.013 CC test/app/stub/stub.o 00:04:54.013 LINK app_repeat 00:04:54.013 CXX test/cpp_headers/dif.o 00:04:54.013 CC test/event/scheduler/scheduler.o 00:04:54.013 LINK nvme_dp 00:04:54.013 CC app/spdk_nvme_discover/discovery_aer.o 00:04:54.013 CC examples/accel/perf/accel_perf.o 00:04:54.274 CXX test/cpp_headers/dma.o 00:04:54.274 LINK stub 00:04:54.274 LINK hello_fsdev 00:04:54.274 LINK scheduler 00:04:54.274 LINK spdk_nvme_discover 00:04:54.274 CC test/nvme/overhead/overhead.o 00:04:54.274 CC examples/blob/hello_world/hello_blob.o 00:04:54.274 CXX test/cpp_headers/endian.o 00:04:54.274 CC examples/blob/cli/blobcli.o 00:04:54.274 CXX test/cpp_headers/env_dpdk.o 00:04:54.536 LINK nvme_manage 00:04:54.536 CXX test/cpp_headers/env.o 00:04:54.536 CXX test/cpp_headers/event.o 00:04:54.536 CXX test/cpp_headers/fd_group.o 00:04:54.536 CC app/spdk_top/spdk_top.o 00:04:54.536 LINK hello_blob 00:04:54.536 CC examples/nvme/arbitration/arbitration.o 00:04:54.536 CXX test/cpp_headers/fd.o 00:04:54.536 CXX test/cpp_headers/file.o 00:04:54.536 LINK overhead 00:04:54.536 CXX test/cpp_headers/fsdev.o 00:04:54.536 CXX test/cpp_headers/fsdev_module.o 00:04:54.536 LINK accel_perf 00:04:54.536 CXX test/cpp_headers/ftl.o 00:04:54.795 CXX test/cpp_headers/fuse_dispatcher.o 00:04:54.795 CXX test/cpp_headers/gpt_spec.o 00:04:54.795 CXX test/cpp_headers/hexlify.o 00:04:54.795 CC test/nvme/err_injection/err_injection.o 00:04:54.795 CC test/nvme/startup/startup.o 00:04:54.795 LINK blobcli 00:04:54.795 LINK arbitration 00:04:54.795 CXX test/cpp_headers/histogram_data.o 00:04:54.795 CC examples/nvme/hotplug/hotplug.o 00:04:54.795 LINK err_injection 00:04:55.054 CC app/spdk_dd/spdk_dd.o 00:04:55.054 CC app/vhost/vhost.o 00:04:55.054 CC examples/bdev/hello_world/hello_bdev.o 00:04:55.054 CXX test/cpp_headers/idxd.o 00:04:55.054 LINK startup 00:04:55.054 CXX test/cpp_headers/idxd_spec.o 00:04:55.054 LINK hotplug 00:04:55.054 LINK vhost 00:04:55.054 CXX test/cpp_headers/init.o 00:04:55.315 CC test/nvme/reserve/reserve.o 00:04:55.315 LINK hello_bdev 00:04:55.315 CC test/nvme/simple_copy/simple_copy.o 00:04:55.315 CC test/accel/dif/dif.o 00:04:55.315 CC test/blobfs/mkfs/mkfs.o 00:04:55.315 CXX test/cpp_headers/ioat.o 00:04:55.315 LINK spdk_dd 00:04:55.315 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:55.315 CC examples/nvme/abort/abort.o 00:04:55.315 LINK reserve 00:04:55.315 LINK mkfs 00:04:55.316 LINK spdk_top 00:04:55.316 CXX test/cpp_headers/ioat_spec.o 00:04:55.316 LINK simple_copy 00:04:55.640 LINK cmb_copy 00:04:55.640 CC examples/bdev/bdevperf/bdevperf.o 00:04:55.640 CXX 
test/cpp_headers/iscsi_spec.o 00:04:55.640 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:55.640 CC test/nvme/connect_stress/connect_stress.o 00:04:55.640 CC test/nvme/boot_partition/boot_partition.o 00:04:55.640 CXX test/cpp_headers/json.o 00:04:55.640 CC app/fio/nvme/fio_plugin.o 00:04:55.640 LINK abort 00:04:55.640 CXX test/cpp_headers/jsonrpc.o 00:04:55.640 LINK pmr_persistence 00:04:55.901 LINK connect_stress 00:04:55.901 CC test/lvol/esnap/esnap.o 00:04:55.901 LINK boot_partition 00:04:55.901 CXX test/cpp_headers/keyring.o 00:04:55.901 CXX test/cpp_headers/keyring_module.o 00:04:55.901 LINK dif 00:04:55.901 CXX test/cpp_headers/likely.o 00:04:55.901 CXX test/cpp_headers/log.o 00:04:55.901 CXX test/cpp_headers/lvol.o 00:04:55.901 CC test/nvme/fused_ordering/fused_ordering.o 00:04:55.901 CC test/nvme/compliance/nvme_compliance.o 00:04:55.901 CXX test/cpp_headers/md5.o 00:04:56.163 CXX test/cpp_headers/memory.o 00:04:56.163 CXX test/cpp_headers/mmio.o 00:04:56.163 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:56.163 LINK fused_ordering 00:04:56.163 CXX test/cpp_headers/nbd.o 00:04:56.163 CC test/nvme/fdp/fdp.o 00:04:56.163 CC test/nvme/cuse/cuse.o 00:04:56.163 CXX test/cpp_headers/net.o 00:04:56.423 LINK spdk_nvme 00:04:56.423 CC app/fio/bdev/fio_plugin.o 00:04:56.423 CXX test/cpp_headers/notify.o 00:04:56.423 LINK nvme_compliance 00:04:56.423 LINK doorbell_aers 00:04:56.423 LINK bdevperf 00:04:56.423 CXX test/cpp_headers/nvme.o 00:04:56.423 CXX test/cpp_headers/nvme_intel.o 00:04:56.423 CXX test/cpp_headers/nvme_ocssd.o 00:04:56.423 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:56.423 CXX test/cpp_headers/nvme_spec.o 00:04:56.423 CXX test/cpp_headers/nvme_zns.o 00:04:56.685 LINK fdp 00:04:56.685 CXX test/cpp_headers/nvmf_cmd.o 00:04:56.685 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:56.685 CC test/bdev/bdevio/bdevio.o 00:04:56.685 CXX test/cpp_headers/nvmf.o 00:04:56.685 CXX test/cpp_headers/nvmf_spec.o 00:04:56.685 CC examples/nvmf/nvmf/nvmf.o 00:04:56.685 CXX test/cpp_headers/nvmf_transport.o 00:04:56.685 CXX test/cpp_headers/opal.o 00:04:56.685 LINK spdk_bdev 00:04:56.947 CXX test/cpp_headers/opal_spec.o 00:04:56.947 CXX test/cpp_headers/pci_ids.o 00:04:56.947 CXX test/cpp_headers/pipe.o 00:04:56.947 CXX test/cpp_headers/queue.o 00:04:56.947 CXX test/cpp_headers/reduce.o 00:04:56.947 CXX test/cpp_headers/rpc.o 00:04:56.947 CXX test/cpp_headers/scheduler.o 00:04:56.947 CXX test/cpp_headers/scsi.o 00:04:56.947 CXX test/cpp_headers/scsi_spec.o 00:04:56.947 CXX test/cpp_headers/sock.o 00:04:56.947 LINK nvmf 00:04:56.947 LINK bdevio 00:04:57.208 CXX test/cpp_headers/stdinc.o 00:04:57.208 CXX test/cpp_headers/string.o 00:04:57.208 CXX test/cpp_headers/thread.o 00:04:57.208 CXX test/cpp_headers/trace.o 00:04:57.208 CXX test/cpp_headers/trace_parser.o 00:04:57.208 CXX test/cpp_headers/tree.o 00:04:57.208 CXX test/cpp_headers/ublk.o 00:04:57.208 CXX test/cpp_headers/util.o 00:04:57.208 CXX test/cpp_headers/uuid.o 00:04:57.208 CXX test/cpp_headers/version.o 00:04:57.208 CXX test/cpp_headers/vfio_user_pci.o 00:04:57.208 CXX test/cpp_headers/vfio_user_spec.o 00:04:57.208 CXX test/cpp_headers/vhost.o 00:04:57.208 CXX test/cpp_headers/vmd.o 00:04:57.208 CXX test/cpp_headers/xor.o 00:04:57.208 CXX test/cpp_headers/zipf.o 00:04:57.470 LINK cuse 00:05:01.739 LINK esnap 00:05:01.739 00:05:01.739 real 1m18.724s 00:05:01.739 user 7m1.373s 00:05:01.739 sys 1m18.036s 00:05:01.739 19:23:28 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:01.739 19:23:28 make -- 
common/autotest_common.sh@10 -- $ set +x 00:05:01.739 ************************************ 00:05:01.739 END TEST make 00:05:01.739 ************************************ 00:05:01.739 19:23:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:01.739 19:23:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:01.739 19:23:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:01.739 19:23:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.739 19:23:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:01.739 19:23:28 -- pm/common@44 -- $ pid=5071 00:05:01.739 19:23:28 -- pm/common@50 -- $ kill -TERM 5071 00:05:01.739 19:23:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.739 19:23:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:01.739 19:23:28 -- pm/common@44 -- $ pid=5072 00:05:01.739 19:23:28 -- pm/common@50 -- $ kill -TERM 5072 00:05:01.739 19:23:28 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:01.739 19:23:28 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:01.739 19:23:28 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:01.739 19:23:28 -- common/autotest_common.sh@1711 -- # lcov --version 00:05:01.739 19:23:28 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:02.001 19:23:29 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:02.001 19:23:29 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.001 19:23:29 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.001 19:23:29 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.001 19:23:29 -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.001 19:23:29 -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.001 19:23:29 -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.001 19:23:29 -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.001 19:23:29 -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.001 19:23:29 -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.001 19:23:29 -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.001 19:23:29 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.001 19:23:29 -- scripts/common.sh@344 -- # case "$op" in 00:05:02.001 19:23:29 -- scripts/common.sh@345 -- # : 1 00:05:02.001 19:23:29 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.001 19:23:29 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.001 19:23:29 -- scripts/common.sh@365 -- # decimal 1 00:05:02.001 19:23:29 -- scripts/common.sh@353 -- # local d=1 00:05:02.001 19:23:29 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.001 19:23:29 -- scripts/common.sh@355 -- # echo 1 00:05:02.001 19:23:29 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.001 19:23:29 -- scripts/common.sh@366 -- # decimal 2 00:05:02.001 19:23:29 -- scripts/common.sh@353 -- # local d=2 00:05:02.001 19:23:29 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.001 19:23:29 -- scripts/common.sh@355 -- # echo 2 00:05:02.001 19:23:29 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.001 19:23:29 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.001 19:23:29 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.001 19:23:29 -- scripts/common.sh@368 -- # return 0 00:05:02.001 19:23:29 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.001 19:23:29 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:02.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.001 --rc genhtml_branch_coverage=1 00:05:02.001 --rc genhtml_function_coverage=1 00:05:02.001 --rc genhtml_legend=1 00:05:02.001 --rc geninfo_all_blocks=1 00:05:02.001 --rc geninfo_unexecuted_blocks=1 00:05:02.001 00:05:02.001 ' 00:05:02.001 19:23:29 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:02.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.001 --rc genhtml_branch_coverage=1 00:05:02.001 --rc genhtml_function_coverage=1 00:05:02.001 --rc genhtml_legend=1 00:05:02.001 --rc geninfo_all_blocks=1 00:05:02.001 --rc geninfo_unexecuted_blocks=1 00:05:02.001 00:05:02.001 ' 00:05:02.001 19:23:29 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:02.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.001 --rc genhtml_branch_coverage=1 00:05:02.001 --rc genhtml_function_coverage=1 00:05:02.001 --rc genhtml_legend=1 00:05:02.001 --rc geninfo_all_blocks=1 00:05:02.001 --rc geninfo_unexecuted_blocks=1 00:05:02.001 00:05:02.001 ' 00:05:02.001 19:23:29 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:02.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.002 --rc genhtml_branch_coverage=1 00:05:02.002 --rc genhtml_function_coverage=1 00:05:02.002 --rc genhtml_legend=1 00:05:02.002 --rc geninfo_all_blocks=1 00:05:02.002 --rc geninfo_unexecuted_blocks=1 00:05:02.002 00:05:02.002 ' 00:05:02.002 19:23:29 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:02.002 19:23:29 -- nvmf/common.sh@7 -- # uname -s 00:05:02.002 19:23:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:02.002 19:23:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.002 19:23:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.002 19:23:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.002 19:23:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.002 19:23:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.002 19:23:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.002 19:23:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.002 19:23:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.002 19:23:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.002 19:23:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:672327fd-94cc-407c-a6be-ea572201c4d7 00:05:02.002 
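The lt 1.15 2 walk traced above is scripts/common.sh deciding whether the installed lcov predates 2.x, which determines whether the extra --rc lcov_branch_coverage/lcov_function_coverage options get added. A minimal bash sketch of that field-by-field comparison (a simplified reconstruction, not the verbatim cmp_versions body):

  # Split both versions on '.', '-' and ':' and compare numerically per field,
  # padding missing fields with 0. Simplified from the trace above.
  lt() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local i n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
          (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo "pre-2.x lcov: add branch/function coverage flags"

Since lcov identifies itself as version 1.15 further down in the log, the comparison returns 0 here and the coverage options are exported.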
19:23:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=672327fd-94cc-407c-a6be-ea572201c4d7 00:05:02.002 19:23:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.002 19:23:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.002 19:23:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:02.002 19:23:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.002 19:23:29 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:02.002 19:23:29 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:02.002 19:23:29 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.002 19:23:29 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.002 19:23:29 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.002 19:23:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.002 19:23:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.002 19:23:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.002 19:23:29 -- paths/export.sh@5 -- # export PATH 00:05:02.002 19:23:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.002 19:23:29 -- nvmf/common.sh@51 -- # : 0 00:05:02.002 19:23:29 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:02.002 19:23:29 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:02.002 19:23:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.002 19:23:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.002 19:23:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.002 19:23:29 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:02.002 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:02.002 19:23:29 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:02.002 19:23:29 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:02.002 19:23:29 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:02.002 19:23:29 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:02.002 19:23:29 -- spdk/autotest.sh@32 -- # uname -s 00:05:02.002 19:23:29 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:02.002 19:23:29 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:02.002 19:23:29 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:02.002 19:23:29 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:02.002 19:23:29 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:02.002 19:23:29 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:02.002 19:23:29 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:02.002 19:23:29 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:02.002 19:23:29 -- spdk/autotest.sh@48 -- # udevadm_pid=54417 00:05:02.002 19:23:29 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:02.002 19:23:29 -- pm/common@17 -- # local monitor 00:05:02.002 19:23:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:02.002 19:23:29 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:02.002 19:23:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:02.002 19:23:29 -- pm/common@25 -- # sleep 1 00:05:02.002 19:23:29 -- pm/common@21 -- # date +%s 00:05:02.002 19:23:29 -- pm/common@21 -- # date +%s 00:05:02.002 19:23:29 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733426609 00:05:02.002 19:23:29 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733426609 00:05:02.002 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733426609_collect-cpu-load.pm.log 00:05:02.002 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733426609_collect-vmstat.pm.log 00:05:02.946 19:23:30 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:02.946 19:23:30 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:02.946 19:23:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.946 19:23:30 -- common/autotest_common.sh@10 -- # set +x 00:05:02.946 19:23:30 -- spdk/autotest.sh@59 -- # create_test_list 00:05:02.946 19:23:30 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:02.947 19:23:30 -- common/autotest_common.sh@10 -- # set +x 00:05:02.947 19:23:30 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:02.947 19:23:30 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:02.947 19:23:30 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:02.947 19:23:30 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:02.947 19:23:30 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:02.947 19:23:30 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:02.947 19:23:30 -- common/autotest_common.sh@1457 -- # uname 00:05:02.947 19:23:30 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:02.947 19:23:30 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:02.947 19:23:30 -- common/autotest_common.sh@1477 -- # uname 00:05:02.947 19:23:30 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:02.947 19:23:30 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:02.947 19:23:30 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:03.306 lcov: LCOV version 1.15 00:05:03.306 19:23:30 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:18.213 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:18.213 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:33.161 19:24:00 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:33.161 19:24:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.161 19:24:00 -- common/autotest_common.sh@10 -- # set +x 00:05:33.161 19:24:00 -- spdk/autotest.sh@78 -- # rm -f 00:05:33.161 19:24:00 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:33.161 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:33.727 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:33.727 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:33.727 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:33.727 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:33.727 19:24:00 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:33.727 19:24:00 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:33.727 19:24:00 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:33.727 19:24:00 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:33.727 19:24:00 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:33.727 19:24:00 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:33.727 19:24:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:33.727 19:24:00 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:05:33.727 19:24:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:33.727 19:24:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:33.727 19:24:00 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:33.727 19:24:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:33.727 19:24:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:33.727 19:24:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:33.727 19:24:00 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:05:33.727 19:24:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:33.727 19:24:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:33.727 19:24:00 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:33.727 19:24:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:33.727 19:24:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:33.727 19:24:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:33.727 19:24:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:05:33.727 19:24:00 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:33.727 19:24:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:33.727 19:24:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:33.727 19:24:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:33.727 19:24:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:05:33.727 19:24:00 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:33.727 19:24:00 -- common/autotest_common.sh@1652 -- # [[ 
-e /sys/block/nvme1n3/queue/zoned ]] 00:05:33.727 19:24:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:33.727 19:24:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:33.727 19:24:00 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:05:33.727 19:24:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:33.727 19:24:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2c2n1 00:05:33.727 19:24:00 -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:05:33.727 19:24:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:05:33.727 19:24:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:33.727 19:24:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:33.727 19:24:00 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:05:33.727 19:24:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:33.727 19:24:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:05:33.727 19:24:00 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:33.727 19:24:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:33.727 19:24:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:33.727 19:24:00 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:33.727 19:24:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:33.727 19:24:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:33.727 19:24:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:33.727 19:24:00 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:33.727 19:24:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:33.727 No valid GPT data, bailing 00:05:33.727 19:24:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:33.727 19:24:00 -- scripts/common.sh@394 -- # pt= 00:05:33.727 19:24:00 -- scripts/common.sh@395 -- # return 1 00:05:33.727 19:24:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:33.727 1+0 records in 00:05:33.727 1+0 records out 00:05:33.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00930743 s, 113 MB/s 00:05:33.727 19:24:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:33.727 19:24:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:33.727 19:24:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:33.727 19:24:00 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:33.727 19:24:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:33.988 No valid GPT data, bailing 00:05:33.988 19:24:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:33.988 19:24:01 -- scripts/common.sh@394 -- # pt= 00:05:33.988 19:24:01 -- scripts/common.sh@395 -- # return 1 00:05:33.988 19:24:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:33.988 1+0 records in 00:05:33.988 1+0 records out 00:05:33.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00512829 s, 204 MB/s 00:05:33.988 19:24:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:33.988 19:24:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:33.988 19:24:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:33.988 19:24:01 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:33.988 19:24:01 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:33.988 No valid GPT data, bailing 00:05:33.988 19:24:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:33.988 19:24:01 -- scripts/common.sh@394 -- # pt= 00:05:33.988 19:24:01 -- scripts/common.sh@395 -- # return 1 00:05:33.988 19:24:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:33.988 1+0 records in 00:05:33.988 1+0 records out 00:05:33.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00323879 s, 324 MB/s 00:05:33.988 19:24:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:33.988 19:24:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:33.988 19:24:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:33.988 19:24:01 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:33.988 19:24:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:33.988 No valid GPT data, bailing 00:05:33.988 19:24:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:33.988 19:24:01 -- scripts/common.sh@394 -- # pt= 00:05:33.988 19:24:01 -- scripts/common.sh@395 -- # return 1 00:05:33.988 19:24:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:33.988 1+0 records in 00:05:33.988 1+0 records out 00:05:33.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00624559 s, 168 MB/s 00:05:33.988 19:24:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:33.988 19:24:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:33.988 19:24:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:33.988 19:24:01 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:33.988 19:24:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:33.988 No valid GPT data, bailing 00:05:33.988 19:24:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:34.248 19:24:01 -- scripts/common.sh@394 -- # pt= 00:05:34.248 19:24:01 -- scripts/common.sh@395 -- # return 1 00:05:34.248 19:24:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:34.248 1+0 records in 00:05:34.248 1+0 records out 00:05:34.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00682267 s, 154 MB/s 00:05:34.248 19:24:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:34.248 19:24:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:34.248 19:24:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:34.248 19:24:01 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:34.248 19:24:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:34.248 No valid GPT data, bailing 00:05:34.248 19:24:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:34.248 19:24:01 -- scripts/common.sh@394 -- # pt= 00:05:34.248 19:24:01 -- scripts/common.sh@395 -- # return 1 00:05:34.248 19:24:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:34.248 1+0 records in 00:05:34.248 1+0 records out 00:05:34.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00653214 s, 161 MB/s 00:05:34.248 19:24:01 -- spdk/autotest.sh@105 -- # sync 00:05:34.248 19:24:01 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:34.248 19:24:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:34.248 19:24:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:36.177 
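Every namespace in the pass above clears the same gate: spdk-gpt.py reports "No valid GPT data, bailing" and blkid finds no PTTYPE, so the device is considered free and its first MiB is zeroed to clear stale metadata. A condensed sketch of that loop, with block_in_use reduced to the blkid probe (an assumption; the real helper also consults spdk-gpt.py before deciding):

  # Whole namespaces only; the extglob pattern matches the trace above.
  shopt -s extglob
  for dev in /dev/nvme*n!(*p*); do
      pt=$(blkid -s PTTYPE -o value "$dev" || true)
      if [[ -z $pt ]]; then                        # no GPT/MBR signature found
          dd if=/dev/zero of="$dev" bs=1M count=1  # clobber the first MiB
      fi
  done

The "1+0 records in / 1+0 records out" pairs and throughput figures above are the visible effect of each dd.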
19:24:03 -- spdk/autotest.sh@111 -- # uname -s
00:05:36.177 19:24:03 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:05:36.177 19:24:03 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:05:36.177 19:24:03 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:36.438 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:36.699 Hugepages
00:05:36.699 node hugesize free / total
00:05:36.699 node0 1048576kB 0 / 0
00:05:36.699 node0 2048kB 0 / 0
00:05:36.699
00:05:36.699 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:36.970 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:05:36.970 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:05:36.970 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:05:36.970 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:05:37.269 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:05:37.269 19:24:04 -- spdk/autotest.sh@117 -- # uname -s
00:05:37.269 19:24:04 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:05:37.269 19:24:04 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:05:37.269 19:24:04 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:37.531 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:38.103 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:05:38.103 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:05:38.103 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:38.103 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:05:38.103 19:24:05 -- common/autotest_common.sh@1517 -- # sleep 1
00:05:39.045 19:24:06 -- common/autotest_common.sh@1518 -- # bdfs=()
00:05:39.045 19:24:06 -- common/autotest_common.sh@1518 -- # local bdfs
00:05:39.045 19:24:06 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:05:39.045 19:24:06 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:05:39.045 19:24:06 -- common/autotest_common.sh@1498 -- # bdfs=()
00:05:39.045 19:24:06 -- common/autotest_common.sh@1498 -- # local bdfs
00:05:39.045 19:24:06 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:39.045 19:24:06 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:05:39.045 19:24:06 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:05:39.305 19:24:06 -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:05:39.305 19:24:06 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:05:39.305 19:24:06 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:39.566 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:39.566 Waiting for block devices as requested
00:05:39.826 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:05:39.826 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:05:39.826 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:05:40.087 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:05:45.373 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:05:45.373 19:24:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:05:45.373 19:24:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:05:45.373 19:24:12 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:45.373 19:24:12 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:45.373 19:24:12 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:45.373 19:24:12 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:45.373 19:24:12 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:45.373 19:24:12 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:45.373 19:24:12 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:45.373 19:24:12 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:45.373 19:24:12 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:45.373 19:24:12 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:45.373 19:24:12 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:45.373 19:24:12 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:45.373 19:24:12 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:45.373 19:24:12 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:45.373 19:24:12 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:45.373 19:24:12 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:45.373 19:24:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:45.373 19:24:12 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:45.373 19:24:12 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:45.373 19:24:12 -- common/autotest_common.sh@1543 -- # continue 00:05:45.373 19:24:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:45.373 19:24:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:45.373 19:24:12 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:45.373 19:24:12 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:45.373 19:24:12 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:45.373 19:24:12 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:45.373 19:24:12 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:45.373 19:24:12 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:45.373 19:24:12 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:45.373 19:24:12 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:45.373 19:24:12 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:45.373 19:24:12 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:45.373 19:24:12 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:45.373 19:24:12 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:45.373 19:24:12 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:45.373 19:24:12 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:45.373 19:24:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:45.373 19:24:12 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:45.373 19:24:12 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:45.373 19:24:12 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:05:45.373 19:24:12 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:45.373 19:24:12 -- common/autotest_common.sh@1543 -- # continue 00:05:45.373 19:24:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:45.373 19:24:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:45.373 19:24:12 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:45.373 19:24:12 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:45.373 19:24:12 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:45.373 19:24:12 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:45.373 19:24:12 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:45.373 19:24:12 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:45.373 19:24:12 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:45.373 19:24:12 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:45.373 19:24:12 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:45.373 19:24:12 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:45.373 19:24:12 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:45.373 19:24:12 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:45.373 19:24:12 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:45.373 19:24:12 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:45.373 19:24:12 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:45.373 19:24:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:45.373 19:24:12 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:45.373 19:24:12 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:45.373 19:24:12 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:45.373 19:24:12 -- common/autotest_common.sh@1543 -- # continue 00:05:45.373 19:24:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:45.373 19:24:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:45.373 19:24:12 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:45.373 19:24:12 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:45.373 19:24:12 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:45.373 19:24:12 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:45.373 19:24:12 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:45.374 19:24:12 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:45.374 19:24:12 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:45.374 19:24:12 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:45.374 19:24:12 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:45.374 19:24:12 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:45.374 19:24:12 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:45.374 19:24:12 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:45.374 19:24:12 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:45.374 19:24:12 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:45.374 19:24:12 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:45.374 19:24:12 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:45.374 19:24:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:45.374 19:24:12 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:45.374 19:24:12 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:45.374 19:24:12 -- common/autotest_common.sh@1543 -- # continue 00:05:45.374 19:24:12 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:45.374 19:24:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:45.374 19:24:12 -- common/autotest_common.sh@10 -- # set +x 00:05:45.374 19:24:12 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:45.374 19:24:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:45.374 19:24:12 -- common/autotest_common.sh@10 -- # set +x 00:05:45.374 19:24:12 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:45.635 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.206 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:46.206 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:46.206 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:46.206 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:46.467 19:24:13 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:46.467 19:24:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:46.467 19:24:13 -- common/autotest_common.sh@10 -- # set +x 00:05:46.467 19:24:13 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:46.467 19:24:13 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:46.467 19:24:13 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:46.467 19:24:13 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:46.467 19:24:13 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:46.467 19:24:13 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:46.467 19:24:13 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:46.467 19:24:13 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:46.467 19:24:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:46.467 19:24:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:46.467 19:24:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:46.467 19:24:13 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:46.467 19:24:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:46.467 19:24:13 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:46.467 19:24:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:46.467 19:24:13 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:46.467 19:24:13 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:46.467 19:24:13 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:46.467 19:24:13 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:46.467 19:24:13 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:46.467 19:24:13 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:46.467 19:24:13 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:46.467 
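The id-ctrl probes repeated above follow one pattern per controller: read OACS, test the Namespace Management bit (0x12a & 0x8 = 8), then read unvmcap; an unallocated capacity of 0 means there is nothing to revert, so the loop continues. An illustrative sketch of that probe (the helper name is hypothetical; the grep/cut parsing mirrors the trace):

  probe_ctrlr() {
      local ctrlr=$1 oacs unvmcap
      oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)       # e.g. ' 0x12a'
      if (( (oacs & 0x8) != 0 )); then                              # bit 3: NS management
          unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
          (( unvmcap == 0 )) && return 0   # nothing to revert; caller continues
          # a revert of the controller's namespaces would happen here
      fi
  }
  probe_ctrlr /dev/nvme1

All four controllers above report oacs=' 0x12a' and unvmcap=' 0', so each iteration hits the continue path.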
19:24:13 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:46.467 19:24:13 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:46.467 19:24:13 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:46.467 19:24:13 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:46.467 19:24:13 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:46.467 19:24:13 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:46.467 19:24:13 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:46.467 19:24:13 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:46.467 19:24:13 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:46.467 19:24:13 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:46.467 19:24:13 -- common/autotest_common.sh@1572 -- # return 0 00:05:46.467 19:24:13 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:46.467 19:24:13 -- common/autotest_common.sh@1580 -- # return 0 00:05:46.467 19:24:13 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:46.467 19:24:13 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:46.467 19:24:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:46.467 19:24:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:46.467 19:24:13 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:46.467 19:24:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:46.467 19:24:13 -- common/autotest_common.sh@10 -- # set +x 00:05:46.467 19:24:13 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:46.467 19:24:13 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:46.467 19:24:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.467 19:24:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.467 19:24:13 -- common/autotest_common.sh@10 -- # set +x 00:05:46.467 ************************************ 00:05:46.467 START TEST env 00:05:46.467 ************************************ 00:05:46.467 19:24:13 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:46.730 * Looking for test storage... 
00:05:46.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:46.730 19:24:13 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:46.730 19:24:13 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:46.730 19:24:13 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:46.730 19:24:13 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:46.730 19:24:13 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.730 19:24:13 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.730 19:24:13 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.730 19:24:13 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.730 19:24:13 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.730 19:24:13 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.730 19:24:13 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.730 19:24:13 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.730 19:24:13 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.730 19:24:13 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.730 19:24:13 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.730 19:24:13 env -- scripts/common.sh@344 -- # case "$op" in 00:05:46.730 19:24:13 env -- scripts/common.sh@345 -- # : 1 00:05:46.730 19:24:13 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.730 19:24:13 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.730 19:24:13 env -- scripts/common.sh@365 -- # decimal 1 00:05:46.730 19:24:13 env -- scripts/common.sh@353 -- # local d=1 00:05:46.730 19:24:13 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.730 19:24:13 env -- scripts/common.sh@355 -- # echo 1 00:05:46.730 19:24:13 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.730 19:24:13 env -- scripts/common.sh@366 -- # decimal 2 00:05:46.730 19:24:13 env -- scripts/common.sh@353 -- # local d=2 00:05:46.730 19:24:13 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.730 19:24:13 env -- scripts/common.sh@355 -- # echo 2 00:05:46.730 19:24:13 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.730 19:24:13 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.730 19:24:13 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.730 19:24:13 env -- scripts/common.sh@368 -- # return 0 00:05:46.730 19:24:13 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.730 19:24:13 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:46.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.730 --rc genhtml_branch_coverage=1 00:05:46.730 --rc genhtml_function_coverage=1 00:05:46.730 --rc genhtml_legend=1 00:05:46.730 --rc geninfo_all_blocks=1 00:05:46.730 --rc geninfo_unexecuted_blocks=1 00:05:46.730 00:05:46.730 ' 00:05:46.730 19:24:13 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:46.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.730 --rc genhtml_branch_coverage=1 00:05:46.730 --rc genhtml_function_coverage=1 00:05:46.730 --rc genhtml_legend=1 00:05:46.730 --rc geninfo_all_blocks=1 00:05:46.730 --rc geninfo_unexecuted_blocks=1 00:05:46.730 00:05:46.730 ' 00:05:46.730 19:24:13 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:46.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.730 --rc genhtml_branch_coverage=1 00:05:46.730 --rc genhtml_function_coverage=1 00:05:46.730 --rc 
genhtml_legend=1 00:05:46.730 --rc geninfo_all_blocks=1 00:05:46.730 --rc geninfo_unexecuted_blocks=1 00:05:46.730 00:05:46.730 ' 00:05:46.730 19:24:13 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:46.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.730 --rc genhtml_branch_coverage=1 00:05:46.730 --rc genhtml_function_coverage=1 00:05:46.731 --rc genhtml_legend=1 00:05:46.731 --rc geninfo_all_blocks=1 00:05:46.731 --rc geninfo_unexecuted_blocks=1 00:05:46.731 00:05:46.731 ' 00:05:46.731 19:24:13 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:46.731 19:24:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.731 19:24:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.731 19:24:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.731 ************************************ 00:05:46.731 START TEST env_memory 00:05:46.731 ************************************ 00:05:46.731 19:24:13 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:46.731 00:05:46.731 00:05:46.731 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.731 http://cunit.sourceforge.net/ 00:05:46.731 00:05:46.731 00:05:46.731 Suite: memory 00:05:46.731 Test: alloc and free memory map ...[2024-12-05 19:24:13.917359] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:46.731 passed 00:05:46.731 Test: mem map translation ...[2024-12-05 19:24:13.956159] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:46.731 [2024-12-05 19:24:13.956223] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:46.731 [2024-12-05 19:24:13.956285] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:46.731 [2024-12-05 19:24:13.956300] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:46.992 passed 00:05:46.992 Test: mem map registration ...[2024-12-05 19:24:14.024325] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:46.992 [2024-12-05 19:24:14.024373] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:46.992 passed 00:05:46.992 Test: mem map adjacent registrations ...passed 00:05:46.992 00:05:46.992 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.992 suites 1 1 n/a 0 0 00:05:46.992 tests 4 4 4 0 0 00:05:46.992 asserts 152 152 152 0 n/a 00:05:46.992 00:05:46.992 Elapsed time = 0.233 seconds 00:05:46.992 00:05:46.992 real 0m0.266s 00:05:46.992 user 0m0.242s 00:05:46.992 sys 0m0.015s 00:05:46.992 19:24:14 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.992 ************************************ 00:05:46.992 END TEST env_memory 00:05:46.992 ************************************ 00:05:46.992 19:24:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:46.992 19:24:14 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:46.992 19:24:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.992 19:24:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.992 19:24:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.992 ************************************ 00:05:46.992 START TEST env_vtophys 00:05:46.992 ************************************ 00:05:46.992 19:24:14 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:46.992 EAL: lib.eal log level changed from notice to debug 00:05:46.992 EAL: Detected lcore 0 as core 0 on socket 0 00:05:46.992 EAL: Detected lcore 1 as core 0 on socket 0 00:05:46.992 EAL: Detected lcore 2 as core 0 on socket 0 00:05:46.992 EAL: Detected lcore 3 as core 0 on socket 0 00:05:46.992 EAL: Detected lcore 4 as core 0 on socket 0 00:05:46.992 EAL: Detected lcore 5 as core 0 on socket 0 00:05:46.992 EAL: Detected lcore 6 as core 0 on socket 0 00:05:46.992 EAL: Detected lcore 7 as core 0 on socket 0 00:05:46.992 EAL: Detected lcore 8 as core 0 on socket 0 00:05:46.992 EAL: Detected lcore 9 as core 0 on socket 0 00:05:46.992 EAL: Maximum logical cores by configuration: 128 00:05:46.992 EAL: Detected CPU lcores: 10 00:05:46.992 EAL: Detected NUMA nodes: 1 00:05:46.992 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:46.992 EAL: Detected shared linkage of DPDK 00:05:46.992 EAL: No shared files mode enabled, IPC will be disabled 00:05:46.992 EAL: Selected IOVA mode 'PA' 00:05:46.992 EAL: Probing VFIO support... 00:05:46.992 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:46.992 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:46.992 EAL: Ask a virtual area of 0x2e000 bytes 00:05:46.992 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:46.992 EAL: Setting up physically contiguous memory... 
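The EAL bring-up above (lcore detection, the failed VFIO probe, the fallback to IOVA mode 'PA') is what SPDK's environment initialization prints before any test logic runs. A minimal sketch of that entry point, assuming only the public spdk/env.h API; the option values here are illustrative, not the vtophys test source:

    #include <stdio.h>
    #include "spdk/env.h"

    int main(void)
    {
        struct spdk_env_opts opts;

        /* Populate defaults, then override a couple of fields. */
        spdk_env_opts_init(&opts);
        opts.name = "vtophys";   /* process name; illustrative */
        opts.core_mask = "0x1";  /* single lcore, matching the trace */

        /* Boots DPDK's EAL: lcore/NUMA detection, VFIO probe and
         * IOVA mode selection, i.e. the lines logged above. */
        if (spdk_env_init(&opts) < 0) {
            fprintf(stderr, "Unable to initialize SPDK env\n");
            return 1;
        }
        return 0;
    }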
00:05:46.992 EAL: Setting maximum number of open files to 524288 00:05:46.992 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:46.992 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:46.992 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.992 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:47.255 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.255 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.255 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:47.255 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:47.255 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.255 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:47.255 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.255 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.255 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:47.255 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:47.255 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.255 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:47.255 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.255 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.255 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:47.255 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:47.255 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.255 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:47.255 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.255 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.255 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:47.255 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:47.255 EAL: Hugepages will be freed exactly as allocated. 00:05:47.255 EAL: No shared files mode enabled, IPC is disabled 00:05:47.255 EAL: No shared files mode enabled, IPC is disabled 00:05:47.255 EAL: TSC frequency is ~2600000 KHz 00:05:47.255 EAL: Main lcore 0 is ready (tid=7f23228dda40;cpuset=[0]) 00:05:47.255 EAL: Trying to obtain current memory policy. 00:05:47.255 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.255 EAL: Restoring previous memory policy: 0 00:05:47.255 EAL: request: mp_malloc_sync 00:05:47.255 EAL: No shared files mode enabled, IPC is disabled 00:05:47.255 EAL: Heap on socket 0 was expanded by 2MB 00:05:47.255 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:47.255 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:47.255 EAL: Mem event callback 'spdk:(nil)' registered 00:05:47.255 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:47.255 00:05:47.255 00:05:47.255 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.255 http://cunit.sourceforge.net/ 00:05:47.255 00:05:47.255 00:05:47.255 Suite: components_suite 00:05:47.519 Test: vtophys_malloc_test ...passed 00:05:47.519 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
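Both vtophys tests boil down to allocating DMA-safe memory and asking the environment layer for its physical translation; roughly speaking, the spdk_malloc variant whose loop begins here is also what drives the "Heap on socket 0 was expanded/shrunk" mem-event callbacks. A hedged sketch of the allocate-and-translate pattern (public spdk/env.h calls, not the actual test source):

    #include "spdk/env.h"

    /* Allocate a DMA-safe buffer and verify it has a physical address,
     * which is essentially what the vtophys tests assert per size. */
    static int check_translation(size_t size)
    {
        void *buf = spdk_dma_malloc(size, 0x200, NULL);
        if (buf == NULL) {
            return -1;
        }

        uint64_t paddr = spdk_vtophys(buf, NULL);
        spdk_dma_free(buf);

        return (paddr == SPDK_VTOPHYS_ERROR) ? -1 : 0;
    }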
00:05:47.520 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.520 EAL: Restoring previous memory policy: 4 00:05:47.520 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.520 EAL: request: mp_malloc_sync 00:05:47.520 EAL: No shared files mode enabled, IPC is disabled 00:05:47.520 EAL: Heap on socket 0 was expanded by 4MB 00:05:47.520 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.520 EAL: request: mp_malloc_sync 00:05:47.520 EAL: No shared files mode enabled, IPC is disabled 00:05:47.520 EAL: Heap on socket 0 was shrunk by 4MB 00:05:47.520 EAL: Trying to obtain current memory policy. 00:05:47.520 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.520 EAL: Restoring previous memory policy: 4 00:05:47.520 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.520 EAL: request: mp_malloc_sync 00:05:47.520 EAL: No shared files mode enabled, IPC is disabled 00:05:47.520 EAL: Heap on socket 0 was expanded by 6MB 00:05:47.520 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.520 EAL: request: mp_malloc_sync 00:05:47.520 EAL: No shared files mode enabled, IPC is disabled 00:05:47.520 EAL: Heap on socket 0 was shrunk by 6MB 00:05:47.520 EAL: Trying to obtain current memory policy. 00:05:47.520 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.520 EAL: Restoring previous memory policy: 4 00:05:47.520 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.520 EAL: request: mp_malloc_sync 00:05:47.520 EAL: No shared files mode enabled, IPC is disabled 00:05:47.520 EAL: Heap on socket 0 was expanded by 10MB 00:05:47.520 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.520 EAL: request: mp_malloc_sync 00:05:47.520 EAL: No shared files mode enabled, IPC is disabled 00:05:47.520 EAL: Heap on socket 0 was shrunk by 10MB 00:05:47.520 EAL: Trying to obtain current memory policy. 00:05:47.520 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.520 EAL: Restoring previous memory policy: 4 00:05:47.520 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.520 EAL: request: mp_malloc_sync 00:05:47.520 EAL: No shared files mode enabled, IPC is disabled 00:05:47.520 EAL: Heap on socket 0 was expanded by 18MB 00:05:47.520 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.520 EAL: request: mp_malloc_sync 00:05:47.520 EAL: No shared files mode enabled, IPC is disabled 00:05:47.520 EAL: Heap on socket 0 was shrunk by 18MB 00:05:47.520 EAL: Trying to obtain current memory policy. 00:05:47.520 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.520 EAL: Restoring previous memory policy: 4 00:05:47.520 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.520 EAL: request: mp_malloc_sync 00:05:47.520 EAL: No shared files mode enabled, IPC is disabled 00:05:47.520 EAL: Heap on socket 0 was expanded by 34MB 00:05:47.800 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.801 EAL: request: mp_malloc_sync 00:05:47.801 EAL: No shared files mode enabled, IPC is disabled 00:05:47.801 EAL: Heap on socket 0 was shrunk by 34MB 00:05:47.801 EAL: Trying to obtain current memory policy. 
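The expansion sizes in this loop are not arbitrary. Reading them off the log: 4, 6, 10, 18 and 34 MB here, then 66, 130, 258, 514 and 1026 MB below. Each step is (2 + 2^n) MB for n = 1..10, so the power-of-two component doubles every iteration while the EAL heap is grown and immediately shrunk back around it. (Inferred from the output; the loop in the test source may be written differently.)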
00:05:47.801 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.801 EAL: Restoring previous memory policy: 4 00:05:47.801 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.801 EAL: request: mp_malloc_sync 00:05:47.801 EAL: No shared files mode enabled, IPC is disabled 00:05:47.801 EAL: Heap on socket 0 was expanded by 66MB 00:05:47.801 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.801 EAL: request: mp_malloc_sync 00:05:47.801 EAL: No shared files mode enabled, IPC is disabled 00:05:47.801 EAL: Heap on socket 0 was shrunk by 66MB 00:05:47.801 EAL: Trying to obtain current memory policy. 00:05:47.801 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.801 EAL: Restoring previous memory policy: 4 00:05:47.801 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.801 EAL: request: mp_malloc_sync 00:05:47.801 EAL: No shared files mode enabled, IPC is disabled 00:05:47.801 EAL: Heap on socket 0 was expanded by 130MB 00:05:48.072 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.072 EAL: request: mp_malloc_sync 00:05:48.072 EAL: No shared files mode enabled, IPC is disabled 00:05:48.072 EAL: Heap on socket 0 was shrunk by 130MB 00:05:48.072 EAL: Trying to obtain current memory policy. 00:05:48.072 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.335 EAL: Restoring previous memory policy: 4 00:05:48.335 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.335 EAL: request: mp_malloc_sync 00:05:48.335 EAL: No shared files mode enabled, IPC is disabled 00:05:48.335 EAL: Heap on socket 0 was expanded by 258MB 00:05:48.596 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.596 EAL: request: mp_malloc_sync 00:05:48.596 EAL: No shared files mode enabled, IPC is disabled 00:05:48.596 EAL: Heap on socket 0 was shrunk by 258MB 00:05:48.858 EAL: Trying to obtain current memory policy. 00:05:48.858 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.858 EAL: Restoring previous memory policy: 4 00:05:48.858 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.858 EAL: request: mp_malloc_sync 00:05:48.858 EAL: No shared files mode enabled, IPC is disabled 00:05:48.858 EAL: Heap on socket 0 was expanded by 514MB 00:05:49.431 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.431 EAL: request: mp_malloc_sync 00:05:49.431 EAL: No shared files mode enabled, IPC is disabled 00:05:49.431 EAL: Heap on socket 0 was shrunk by 514MB 00:05:50.019 EAL: Trying to obtain current memory policy. 
00:05:50.019 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.282 EAL: Restoring previous memory policy: 4 00:05:50.282 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.282 EAL: request: mp_malloc_sync 00:05:50.282 EAL: No shared files mode enabled, IPC is disabled 00:05:50.282 EAL: Heap on socket 0 was expanded by 1026MB 00:05:51.225 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.487 EAL: request: mp_malloc_sync 00:05:51.487 EAL: No shared files mode enabled, IPC is disabled 00:05:51.487 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:52.431 passed 00:05:52.431 00:05:52.431 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.431 suites 1 1 n/a 0 0 00:05:52.431 tests 2 2 2 0 0 00:05:52.431 asserts 5635 5635 5635 0 n/a 00:05:52.431 00:05:52.431 Elapsed time = 5.138 seconds 00:05:52.431 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.431 EAL: request: mp_malloc_sync 00:05:52.431 EAL: No shared files mode enabled, IPC is disabled 00:05:52.431 EAL: Heap on socket 0 was shrunk by 2MB 00:05:52.431 EAL: No shared files mode enabled, IPC is disabled 00:05:52.431 EAL: No shared files mode enabled, IPC is disabled 00:05:52.431 EAL: No shared files mode enabled, IPC is disabled 00:05:52.431 00:05:52.431 real 0m5.412s 00:05:52.431 user 0m4.593s 00:05:52.431 sys 0m0.670s 00:05:52.431 19:24:19 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.431 ************************************ 00:05:52.431 END TEST env_vtophys 00:05:52.431 ************************************ 00:05:52.431 19:24:19 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:52.431 19:24:19 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:52.431 19:24:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.431 19:24:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.431 19:24:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.431 ************************************ 00:05:52.431 START TEST env_pci 00:05:52.431 ************************************ 00:05:52.431 19:24:19 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:52.691 00:05:52.691 00:05:52.691 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.691 http://cunit.sourceforge.net/ 00:05:52.691 00:05:52.691 00:05:52.691 Suite: pci 00:05:52.691 Test: pci_hook ...[2024-12-05 19:24:19.695516] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57189 has claimed it 00:05:52.691 passed 00:05:52.691 00:05:52.691 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.691 suites 1 1 n/a 0 0 00:05:52.691 tests 1 1 1 0 0 00:05:52.691 asserts 25 25 25 0 n/a 00:05:52.691 00:05:52.691 Elapsed time = 0.005 seconds 00:05:52.691 EAL: Cannot find device (10000:00:01.0) 00:05:52.691 EAL: Failed to attach device on primary process 00:05:52.691 00:05:52.691 real 0m0.065s 00:05:52.691 user 0m0.027s 00:05:52.691 sys 0m0.037s 00:05:52.691 19:24:19 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.691 ************************************ 00:05:52.691 END TEST env_pci 00:05:52.691 ************************************ 00:05:52.691 19:24:19 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:52.691 19:24:19 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:52.691 19:24:19 env -- env/env.sh@15 -- # uname 00:05:52.691 19:24:19 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:52.691 19:24:19 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:52.691 19:24:19 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:52.691 19:24:19 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:52.691 19:24:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.691 19:24:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.691 ************************************ 00:05:52.691 START TEST env_dpdk_post_init 00:05:52.691 ************************************ 00:05:52.691 19:24:19 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:52.691 EAL: Detected CPU lcores: 10 00:05:52.691 EAL: Detected NUMA nodes: 1 00:05:52.691 EAL: Detected shared linkage of DPDK 00:05:52.691 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:52.691 EAL: Selected IOVA mode 'PA' 00:05:52.953 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:52.953 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:52.953 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:52.953 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:52.953 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:52.953 Starting DPDK initialization... 00:05:52.953 Starting SPDK post initialization... 00:05:52.953 SPDK NVMe probe 00:05:52.953 Attaching to 0000:00:10.0 00:05:52.953 Attaching to 0000:00:11.0 00:05:52.953 Attaching to 0000:00:12.0 00:05:52.953 Attaching to 0000:00:13.0 00:05:52.953 Attached to 0000:00:13.0 00:05:52.953 Attached to 0000:00:10.0 00:05:52.953 Attached to 0000:00:11.0 00:05:52.953 Attached to 0000:00:12.0 00:05:52.953 Cleaning up... 
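The probe/attach sequence above is the standard SPDK NVMe enumeration flow; note that the "Attached to" order (13.0 first) differs from the "Attaching to" order, presumably because each attach callback fires as its controller finishes initialization. A hedged sketch of the two callbacks involved, assuming the public spdk/nvme.h API rather than the env_dpdk_post_init source:

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Accept every controller the bus scan offers (1b36:0010 above). */
    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true;  /* true => go ahead and attach */
    }

    static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
    }

    static int scan_local_pcie(void)
    {
        /* trid == NULL enumerates local PCIe NVMe controllers. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }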
00:05:52.953 00:05:52.953 real 0m0.256s 00:05:52.953 user 0m0.082s 00:05:52.953 sys 0m0.075s 00:05:52.953 19:24:20 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.953 ************************************ 00:05:52.953 END TEST env_dpdk_post_init 00:05:52.953 ************************************ 00:05:52.953 19:24:20 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:52.953 19:24:20 env -- env/env.sh@26 -- # uname 00:05:52.953 19:24:20 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:52.953 19:24:20 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:52.953 19:24:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.953 19:24:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.953 19:24:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.953 ************************************ 00:05:52.953 START TEST env_mem_callbacks 00:05:52.953 ************************************ 00:05:52.953 19:24:20 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:52.953 EAL: Detected CPU lcores: 10 00:05:52.953 EAL: Detected NUMA nodes: 1 00:05:52.953 EAL: Detected shared linkage of DPDK 00:05:52.953 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:52.953 EAL: Selected IOVA mode 'PA' 00:05:53.214 00:05:53.214 00:05:53.214 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.214 http://cunit.sourceforge.net/ 00:05:53.214 00:05:53.214 00:05:53.214 Suite: memory 00:05:53.214 Test: test ... 00:05:53.214 register 0x200000200000 2097152 00:05:53.214 malloc 3145728 00:05:53.214 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:53.214 register 0x200000400000 4194304 00:05:53.214 buf 0x2000004fffc0 len 3145728 PASSED 00:05:53.214 malloc 64 00:05:53.214 buf 0x2000004ffec0 len 64 PASSED 00:05:53.214 malloc 4194304 00:05:53.214 register 0x200000800000 6291456 00:05:53.214 buf 0x2000009fffc0 len 4194304 PASSED 00:05:53.214 free 0x2000004fffc0 3145728 00:05:53.214 free 0x2000004ffec0 64 00:05:53.214 unregister 0x200000400000 4194304 PASSED 00:05:53.214 free 0x2000009fffc0 4194304 00:05:53.214 unregister 0x200000800000 6291456 PASSED 00:05:53.214 malloc 8388608 00:05:53.214 register 0x200000400000 10485760 00:05:53.214 buf 0x2000005fffc0 len 8388608 PASSED 00:05:53.214 free 0x2000005fffc0 8388608 00:05:53.214 unregister 0x200000400000 10485760 PASSED 00:05:53.214 passed 00:05:53.214 00:05:53.214 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.214 suites 1 1 n/a 0 0 00:05:53.214 tests 1 1 1 0 0 00:05:53.214 asserts 15 15 15 0 n/a 00:05:53.214 00:05:53.214 Elapsed time = 0.048 seconds 00:05:53.214 00:05:53.214 real 0m0.217s 00:05:53.214 user 0m0.066s 00:05:53.214 sys 0m0.049s 00:05:53.214 19:24:20 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.214 19:24:20 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:53.214 ************************************ 00:05:53.214 END TEST env_mem_callbacks 00:05:53.214 ************************************ 00:05:53.214 ************************************ 00:05:53.214 END TEST env 00:05:53.214 ************************************ 00:05:53.214 00:05:53.214 real 0m6.686s 00:05:53.214 user 0m5.169s 00:05:53.214 sys 0m1.060s 00:05:53.214 19:24:20 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.214 19:24:20 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:53.214 19:24:20 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:53.214 19:24:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.214 19:24:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.214 19:24:20 -- common/autotest_common.sh@10 -- # set +x 00:05:53.214 ************************************ 00:05:53.214 START TEST rpc 00:05:53.214 ************************************ 00:05:53.214 19:24:20 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:53.476 * Looking for test storage... 00:05:53.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:53.476 19:24:20 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:53.476 19:24:20 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:53.476 19:24:20 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:53.476 19:24:20 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:53.476 19:24:20 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.476 19:24:20 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.476 19:24:20 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.476 19:24:20 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.476 19:24:20 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.476 19:24:20 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.476 19:24:20 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.477 19:24:20 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.477 19:24:20 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.477 19:24:20 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.477 19:24:20 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.477 19:24:20 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:53.477 19:24:20 rpc -- scripts/common.sh@345 -- # : 1 00:05:53.477 19:24:20 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.477 19:24:20 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.477 19:24:20 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:53.477 19:24:20 rpc -- scripts/common.sh@353 -- # local d=1 00:05:53.477 19:24:20 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.477 19:24:20 rpc -- scripts/common.sh@355 -- # echo 1 00:05:53.477 19:24:20 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.477 19:24:20 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:53.477 19:24:20 rpc -- scripts/common.sh@353 -- # local d=2 00:05:53.477 19:24:20 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.477 19:24:20 rpc -- scripts/common.sh@355 -- # echo 2 00:05:53.477 19:24:20 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.477 19:24:20 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.477 19:24:20 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.477 19:24:20 rpc -- scripts/common.sh@368 -- # return 0 00:05:53.477 19:24:20 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.477 19:24:20 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:53.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.477 --rc genhtml_branch_coverage=1 00:05:53.477 --rc genhtml_function_coverage=1 00:05:53.477 --rc genhtml_legend=1 00:05:53.477 --rc geninfo_all_blocks=1 00:05:53.477 --rc geninfo_unexecuted_blocks=1 00:05:53.477 00:05:53.477 ' 00:05:53.477 19:24:20 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:53.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.477 --rc genhtml_branch_coverage=1 00:05:53.477 --rc genhtml_function_coverage=1 00:05:53.477 --rc genhtml_legend=1 00:05:53.477 --rc geninfo_all_blocks=1 00:05:53.477 --rc geninfo_unexecuted_blocks=1 00:05:53.477 00:05:53.477 ' 00:05:53.477 19:24:20 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:53.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.477 --rc genhtml_branch_coverage=1 00:05:53.477 --rc genhtml_function_coverage=1 00:05:53.477 --rc genhtml_legend=1 00:05:53.477 --rc geninfo_all_blocks=1 00:05:53.477 --rc geninfo_unexecuted_blocks=1 00:05:53.477 00:05:53.477 ' 00:05:53.477 19:24:20 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:53.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.477 --rc genhtml_branch_coverage=1 00:05:53.477 --rc genhtml_function_coverage=1 00:05:53.477 --rc genhtml_legend=1 00:05:53.477 --rc geninfo_all_blocks=1 00:05:53.477 --rc geninfo_unexecuted_blocks=1 00:05:53.477 00:05:53.477 ' 00:05:53.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.477 19:24:20 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57311 00:05:53.477 19:24:20 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.477 19:24:20 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:53.477 19:24:20 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57311 00:05:53.477 19:24:20 rpc -- common/autotest_common.sh@835 -- # '[' -z 57311 ']' 00:05:53.477 19:24:20 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.477 19:24:20 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.477 19:24:20 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
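Looking back at the env_mem_callbacks trace further up: its register/unregister lines correspond to SPDK's dynamic memory registration API, and each call is what fires the mem-map notify callbacks the test counts. A minimal sketch, assuming spdk/env.h (addresses such as 0x200000200000 come from the test output, not this snippet):

    #include "spdk/env.h"

    /* Register a region so the env layer builds translations for it,
     * use it for I/O, then tear the registration down again. */
    static int with_registered_region(void *vaddr, size_t len)
    {
        int rc = spdk_mem_register(vaddr, len);  /* e.g. 2 MB at 0x200000200000 */
        if (rc != 0) {
            return rc;
        }
        /* ... DMA into the region here ... */
        return spdk_mem_unregister(vaddr, len);
    }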
00:05:53.477 19:24:20 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.477 19:24:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.477 [2024-12-05 19:24:20.670479] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:05:53.477 [2024-12-05 19:24:20.670587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57311 ] 00:05:53.740 [2024-12-05 19:24:20.828216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.740 [2024-12-05 19:24:20.929328] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:53.740 [2024-12-05 19:24:20.929385] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57311' to capture a snapshot of events at runtime. 00:05:53.740 [2024-12-05 19:24:20.929396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:53.740 [2024-12-05 19:24:20.929405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:53.740 [2024-12-05 19:24:20.929413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57311 for offline analysis/debug. 00:05:53.740 [2024-12-05 19:24:20.930323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.312 19:24:21 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.312 19:24:21 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:54.312 19:24:21 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:54.312 19:24:21 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:54.312 19:24:21 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:54.312 19:24:21 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:54.312 19:24:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.312 19:24:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.312 19:24:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.312 ************************************ 00:05:54.312 START TEST rpc_integrity 00:05:54.312 ************************************ 00:05:54.312 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:54.312 19:24:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:54.312 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.312 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.312 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.312 19:24:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:54.573 19:24:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:54.573 19:24:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:54.573 19:24:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:54.573 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.573 19:24:21 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.573 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.573 19:24:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:54.573 19:24:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:54.573 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.573 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.573 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.573 19:24:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:54.573 { 00:05:54.573 "name": "Malloc0", 00:05:54.573 "aliases": [ 00:05:54.573 "47671330-e701-42d4-a1e9-2df025058e29" 00:05:54.573 ], 00:05:54.573 "product_name": "Malloc disk", 00:05:54.573 "block_size": 512, 00:05:54.573 "num_blocks": 16384, 00:05:54.573 "uuid": "47671330-e701-42d4-a1e9-2df025058e29", 00:05:54.573 "assigned_rate_limits": { 00:05:54.573 "rw_ios_per_sec": 0, 00:05:54.573 "rw_mbytes_per_sec": 0, 00:05:54.573 "r_mbytes_per_sec": 0, 00:05:54.573 "w_mbytes_per_sec": 0 00:05:54.573 }, 00:05:54.573 "claimed": false, 00:05:54.573 "zoned": false, 00:05:54.573 "supported_io_types": { 00:05:54.573 "read": true, 00:05:54.573 "write": true, 00:05:54.573 "unmap": true, 00:05:54.573 "flush": true, 00:05:54.573 "reset": true, 00:05:54.573 "nvme_admin": false, 00:05:54.573 "nvme_io": false, 00:05:54.573 "nvme_io_md": false, 00:05:54.573 "write_zeroes": true, 00:05:54.573 "zcopy": true, 00:05:54.573 "get_zone_info": false, 00:05:54.573 "zone_management": false, 00:05:54.573 "zone_append": false, 00:05:54.573 "compare": false, 00:05:54.573 "compare_and_write": false, 00:05:54.573 "abort": true, 00:05:54.573 "seek_hole": false, 00:05:54.573 "seek_data": false, 00:05:54.573 "copy": true, 00:05:54.573 "nvme_iov_md": false 00:05:54.573 }, 00:05:54.573 "memory_domains": [ 00:05:54.573 { 00:05:54.573 "dma_device_id": "system", 00:05:54.573 "dma_device_type": 1 00:05:54.573 }, 00:05:54.573 { 00:05:54.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.573 "dma_device_type": 2 00:05:54.573 } 00:05:54.573 ], 00:05:54.573 "driver_specific": {} 00:05:54.573 } 00:05:54.573 ]' 00:05:54.573 19:24:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:54.573 19:24:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:54.573 19:24:21 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:54.573 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.573 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.573 [2024-12-05 19:24:21.690625] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:54.573 [2024-12-05 19:24:21.690694] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:54.573 [2024-12-05 19:24:21.690721] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:54.573 [2024-12-05 19:24:21.690733] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:54.573 [2024-12-05 19:24:21.692959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:54.573 [2024-12-05 19:24:21.693001] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:54.573 Passthru0 00:05:54.573 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.573 
19:24:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:54.573 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.573 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.573 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.573 19:24:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:54.573 { 00:05:54.573 "name": "Malloc0", 00:05:54.573 "aliases": [ 00:05:54.573 "47671330-e701-42d4-a1e9-2df025058e29" 00:05:54.573 ], 00:05:54.573 "product_name": "Malloc disk", 00:05:54.573 "block_size": 512, 00:05:54.573 "num_blocks": 16384, 00:05:54.573 "uuid": "47671330-e701-42d4-a1e9-2df025058e29", 00:05:54.573 "assigned_rate_limits": { 00:05:54.573 "rw_ios_per_sec": 0, 00:05:54.573 "rw_mbytes_per_sec": 0, 00:05:54.573 "r_mbytes_per_sec": 0, 00:05:54.573 "w_mbytes_per_sec": 0 00:05:54.573 }, 00:05:54.573 "claimed": true, 00:05:54.573 "claim_type": "exclusive_write", 00:05:54.573 "zoned": false, 00:05:54.573 "supported_io_types": { 00:05:54.573 "read": true, 00:05:54.573 "write": true, 00:05:54.573 "unmap": true, 00:05:54.573 "flush": true, 00:05:54.573 "reset": true, 00:05:54.573 "nvme_admin": false, 00:05:54.573 "nvme_io": false, 00:05:54.573 "nvme_io_md": false, 00:05:54.573 "write_zeroes": true, 00:05:54.573 "zcopy": true, 00:05:54.573 "get_zone_info": false, 00:05:54.573 "zone_management": false, 00:05:54.573 "zone_append": false, 00:05:54.573 "compare": false, 00:05:54.573 "compare_and_write": false, 00:05:54.573 "abort": true, 00:05:54.573 "seek_hole": false, 00:05:54.573 "seek_data": false, 00:05:54.573 "copy": true, 00:05:54.573 "nvme_iov_md": false 00:05:54.573 }, 00:05:54.573 "memory_domains": [ 00:05:54.573 { 00:05:54.573 "dma_device_id": "system", 00:05:54.574 "dma_device_type": 1 00:05:54.574 }, 00:05:54.574 { 00:05:54.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.574 "dma_device_type": 2 00:05:54.574 } 00:05:54.574 ], 00:05:54.574 "driver_specific": {} 00:05:54.574 }, 00:05:54.574 { 00:05:54.574 "name": "Passthru0", 00:05:54.574 "aliases": [ 00:05:54.574 "214b6b75-8289-50dd-bfdf-ed6f60916c85" 00:05:54.574 ], 00:05:54.574 "product_name": "passthru", 00:05:54.574 "block_size": 512, 00:05:54.574 "num_blocks": 16384, 00:05:54.574 "uuid": "214b6b75-8289-50dd-bfdf-ed6f60916c85", 00:05:54.574 "assigned_rate_limits": { 00:05:54.574 "rw_ios_per_sec": 0, 00:05:54.574 "rw_mbytes_per_sec": 0, 00:05:54.574 "r_mbytes_per_sec": 0, 00:05:54.574 "w_mbytes_per_sec": 0 00:05:54.574 }, 00:05:54.574 "claimed": false, 00:05:54.574 "zoned": false, 00:05:54.574 "supported_io_types": { 00:05:54.574 "read": true, 00:05:54.574 "write": true, 00:05:54.574 "unmap": true, 00:05:54.574 "flush": true, 00:05:54.574 "reset": true, 00:05:54.574 "nvme_admin": false, 00:05:54.574 "nvme_io": false, 00:05:54.574 "nvme_io_md": false, 00:05:54.574 "write_zeroes": true, 00:05:54.574 "zcopy": true, 00:05:54.574 "get_zone_info": false, 00:05:54.574 "zone_management": false, 00:05:54.574 "zone_append": false, 00:05:54.574 "compare": false, 00:05:54.574 "compare_and_write": false, 00:05:54.574 "abort": true, 00:05:54.574 "seek_hole": false, 00:05:54.574 "seek_data": false, 00:05:54.574 "copy": true, 00:05:54.574 "nvme_iov_md": false 00:05:54.574 }, 00:05:54.574 "memory_domains": [ 00:05:54.574 { 00:05:54.574 "dma_device_id": "system", 00:05:54.574 "dma_device_type": 1 00:05:54.574 }, 00:05:54.574 { 00:05:54.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.574 "dma_device_type": 2 
00:05:54.574 } 00:05:54.574 ], 00:05:54.574 "driver_specific": { 00:05:54.574 "passthru": { 00:05:54.574 "name": "Passthru0", 00:05:54.574 "base_bdev_name": "Malloc0" 00:05:54.574 } 00:05:54.574 } 00:05:54.574 } 00:05:54.574 ]' 00:05:54.574 19:24:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:54.574 19:24:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:54.574 19:24:21 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:54.574 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.574 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.574 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.574 19:24:21 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:54.574 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.574 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.574 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.574 19:24:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:54.574 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.574 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.574 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.574 19:24:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:54.574 19:24:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:54.574 ************************************ 00:05:54.574 END TEST rpc_integrity 00:05:54.574 ************************************ 00:05:54.574 19:24:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:54.574 00:05:54.574 real 0m0.273s 00:05:54.574 user 0m0.152s 00:05:54.574 sys 0m0.031s 00:05:54.574 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.574 19:24:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.836 19:24:21 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:54.836 19:24:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.836 19:24:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.836 19:24:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.836 ************************************ 00:05:54.836 START TEST rpc_plugins 00:05:54.836 ************************************ 00:05:54.836 19:24:21 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:54.836 19:24:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:54.836 19:24:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.836 19:24:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.836 19:24:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.836 19:24:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:54.836 19:24:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:54.836 19:24:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.836 19:24:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.836 19:24:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.836 19:24:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:54.836 { 00:05:54.836 "name": "Malloc1", 00:05:54.836 "aliases": 
[ 00:05:54.836 "861f0b28-7c93-4ca3-bc1a-fa6fd521d045" 00:05:54.836 ], 00:05:54.836 "product_name": "Malloc disk", 00:05:54.836 "block_size": 4096, 00:05:54.836 "num_blocks": 256, 00:05:54.836 "uuid": "861f0b28-7c93-4ca3-bc1a-fa6fd521d045", 00:05:54.836 "assigned_rate_limits": { 00:05:54.836 "rw_ios_per_sec": 0, 00:05:54.836 "rw_mbytes_per_sec": 0, 00:05:54.836 "r_mbytes_per_sec": 0, 00:05:54.836 "w_mbytes_per_sec": 0 00:05:54.836 }, 00:05:54.836 "claimed": false, 00:05:54.836 "zoned": false, 00:05:54.836 "supported_io_types": { 00:05:54.836 "read": true, 00:05:54.836 "write": true, 00:05:54.836 "unmap": true, 00:05:54.836 "flush": true, 00:05:54.836 "reset": true, 00:05:54.836 "nvme_admin": false, 00:05:54.836 "nvme_io": false, 00:05:54.836 "nvme_io_md": false, 00:05:54.836 "write_zeroes": true, 00:05:54.836 "zcopy": true, 00:05:54.836 "get_zone_info": false, 00:05:54.836 "zone_management": false, 00:05:54.836 "zone_append": false, 00:05:54.836 "compare": false, 00:05:54.836 "compare_and_write": false, 00:05:54.836 "abort": true, 00:05:54.836 "seek_hole": false, 00:05:54.836 "seek_data": false, 00:05:54.836 "copy": true, 00:05:54.836 "nvme_iov_md": false 00:05:54.836 }, 00:05:54.836 "memory_domains": [ 00:05:54.836 { 00:05:54.836 "dma_device_id": "system", 00:05:54.836 "dma_device_type": 1 00:05:54.836 }, 00:05:54.836 { 00:05:54.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.836 "dma_device_type": 2 00:05:54.836 } 00:05:54.836 ], 00:05:54.836 "driver_specific": {} 00:05:54.836 } 00:05:54.836 ]' 00:05:54.836 19:24:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:54.836 19:24:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:54.836 19:24:21 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:54.836 19:24:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.836 19:24:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.836 19:24:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.836 19:24:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:54.836 19:24:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.836 19:24:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.836 19:24:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.836 19:24:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:54.836 19:24:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:54.836 ************************************ 00:05:54.836 END TEST rpc_plugins 00:05:54.836 ************************************ 00:05:54.836 19:24:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:54.836 00:05:54.836 real 0m0.126s 00:05:54.836 user 0m0.067s 00:05:54.836 sys 0m0.020s 00:05:54.836 19:24:22 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.836 19:24:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.836 19:24:22 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:54.836 19:24:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.836 19:24:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.836 19:24:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.836 ************************************ 00:05:54.836 START TEST rpc_trace_cmd_test 00:05:54.836 ************************************ 00:05:54.836 19:24:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:05:54.836 19:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:54.836 19:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:54.836 19:24:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.836 19:24:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:54.836 19:24:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.836 19:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:54.836 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57311", 00:05:54.836 "tpoint_group_mask": "0x8", 00:05:54.836 "iscsi_conn": { 00:05:54.836 "mask": "0x2", 00:05:54.836 "tpoint_mask": "0x0" 00:05:54.836 }, 00:05:54.836 "scsi": { 00:05:54.836 "mask": "0x4", 00:05:54.836 "tpoint_mask": "0x0" 00:05:54.836 }, 00:05:54.836 "bdev": { 00:05:54.836 "mask": "0x8", 00:05:54.836 "tpoint_mask": "0xffffffffffffffff" 00:05:54.836 }, 00:05:54.836 "nvmf_rdma": { 00:05:54.836 "mask": "0x10", 00:05:54.836 "tpoint_mask": "0x0" 00:05:54.836 }, 00:05:54.836 "nvmf_tcp": { 00:05:54.836 "mask": "0x20", 00:05:54.836 "tpoint_mask": "0x0" 00:05:54.836 }, 00:05:54.836 "ftl": { 00:05:54.836 "mask": "0x40", 00:05:54.836 "tpoint_mask": "0x0" 00:05:54.836 }, 00:05:54.836 "blobfs": { 00:05:54.836 "mask": "0x80", 00:05:54.836 "tpoint_mask": "0x0" 00:05:54.836 }, 00:05:54.836 "dsa": { 00:05:54.836 "mask": "0x200", 00:05:54.836 "tpoint_mask": "0x0" 00:05:54.836 }, 00:05:54.836 "thread": { 00:05:54.836 "mask": "0x400", 00:05:54.836 "tpoint_mask": "0x0" 00:05:54.836 }, 00:05:54.836 "nvme_pcie": { 00:05:54.836 "mask": "0x800", 00:05:54.836 "tpoint_mask": "0x0" 00:05:54.836 }, 00:05:54.836 "iaa": { 00:05:54.836 "mask": "0x1000", 00:05:54.836 "tpoint_mask": "0x0" 00:05:54.836 }, 00:05:54.836 "nvme_tcp": { 00:05:54.836 "mask": "0x2000", 00:05:54.836 "tpoint_mask": "0x0" 00:05:54.836 }, 00:05:54.836 "bdev_nvme": { 00:05:54.836 "mask": "0x4000", 00:05:54.836 "tpoint_mask": "0x0" 00:05:54.836 }, 00:05:54.836 "sock": { 00:05:54.836 "mask": "0x8000", 00:05:54.836 "tpoint_mask": "0x0" 00:05:54.836 }, 00:05:54.836 "blob": { 00:05:54.836 "mask": "0x10000", 00:05:54.836 "tpoint_mask": "0x0" 00:05:54.836 }, 00:05:54.836 "bdev_raid": { 00:05:54.836 "mask": "0x20000", 00:05:54.836 "tpoint_mask": "0x0" 00:05:54.836 }, 00:05:54.836 "scheduler": { 00:05:54.836 "mask": "0x40000", 00:05:54.836 "tpoint_mask": "0x0" 00:05:54.836 } 00:05:54.836 }' 00:05:54.836 19:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:55.157 19:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:55.157 19:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:55.157 19:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:55.157 19:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:55.157 19:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:55.157 19:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:55.157 19:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:55.157 19:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:55.157 ************************************ 00:05:55.157 END TEST rpc_trace_cmd_test 00:05:55.157 ************************************ 00:05:55.157 19:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:55.157 00:05:55.157 real 0m0.179s 
00:05:55.157 user 0m0.144s 00:05:55.157 sys 0m0.025s 00:05:55.157 19:24:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.157 19:24:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:55.157 19:24:22 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:55.157 19:24:22 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:55.157 19:24:22 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:55.157 19:24:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.157 19:24:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.157 19:24:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.157 ************************************ 00:05:55.157 START TEST rpc_daemon_integrity 00:05:55.157 ************************************ 00:05:55.157 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:55.157 19:24:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:55.157 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.157 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.157 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.157 19:24:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:55.157 19:24:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:55.157 19:24:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:55.157 19:24:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:55.157 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.157 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.157 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.157 19:24:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:55.157 19:24:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:55.157 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.157 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.157 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.157 19:24:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:55.157 { 00:05:55.157 "name": "Malloc2", 00:05:55.157 "aliases": [ 00:05:55.157 "2f02249c-150d-4be6-a71e-22bbcb511c34" 00:05:55.157 ], 00:05:55.157 "product_name": "Malloc disk", 00:05:55.157 "block_size": 512, 00:05:55.157 "num_blocks": 16384, 00:05:55.157 "uuid": "2f02249c-150d-4be6-a71e-22bbcb511c34", 00:05:55.157 "assigned_rate_limits": { 00:05:55.157 "rw_ios_per_sec": 0, 00:05:55.157 "rw_mbytes_per_sec": 0, 00:05:55.157 "r_mbytes_per_sec": 0, 00:05:55.157 "w_mbytes_per_sec": 0 00:05:55.157 }, 00:05:55.157 "claimed": false, 00:05:55.157 "zoned": false, 00:05:55.157 "supported_io_types": { 00:05:55.157 "read": true, 00:05:55.157 "write": true, 00:05:55.157 "unmap": true, 00:05:55.157 "flush": true, 00:05:55.157 "reset": true, 00:05:55.157 "nvme_admin": false, 00:05:55.157 "nvme_io": false, 00:05:55.157 "nvme_io_md": false, 00:05:55.157 "write_zeroes": true, 00:05:55.157 "zcopy": true, 00:05:55.157 "get_zone_info": false, 00:05:55.157 "zone_management": false, 00:05:55.157 "zone_append": false, 00:05:55.157 "compare": false, 00:05:55.157 
"compare_and_write": false, 00:05:55.157 "abort": true, 00:05:55.157 "seek_hole": false, 00:05:55.157 "seek_data": false, 00:05:55.157 "copy": true, 00:05:55.158 "nvme_iov_md": false 00:05:55.158 }, 00:05:55.158 "memory_domains": [ 00:05:55.158 { 00:05:55.158 "dma_device_id": "system", 00:05:55.158 "dma_device_type": 1 00:05:55.158 }, 00:05:55.158 { 00:05:55.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.158 "dma_device_type": 2 00:05:55.158 } 00:05:55.158 ], 00:05:55.158 "driver_specific": {} 00:05:55.158 } 00:05:55.158 ]' 00:05:55.158 19:24:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.420 [2024-12-05 19:24:22.418546] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:55.420 [2024-12-05 19:24:22.418606] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:55.420 [2024-12-05 19:24:22.418627] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:55.420 [2024-12-05 19:24:22.418638] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:55.420 [2024-12-05 19:24:22.420813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:55.420 [2024-12-05 19:24:22.420942] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:55.420 Passthru0 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:55.420 { 00:05:55.420 "name": "Malloc2", 00:05:55.420 "aliases": [ 00:05:55.420 "2f02249c-150d-4be6-a71e-22bbcb511c34" 00:05:55.420 ], 00:05:55.420 "product_name": "Malloc disk", 00:05:55.420 "block_size": 512, 00:05:55.420 "num_blocks": 16384, 00:05:55.420 "uuid": "2f02249c-150d-4be6-a71e-22bbcb511c34", 00:05:55.420 "assigned_rate_limits": { 00:05:55.420 "rw_ios_per_sec": 0, 00:05:55.420 "rw_mbytes_per_sec": 0, 00:05:55.420 "r_mbytes_per_sec": 0, 00:05:55.420 "w_mbytes_per_sec": 0 00:05:55.420 }, 00:05:55.420 "claimed": true, 00:05:55.420 "claim_type": "exclusive_write", 00:05:55.420 "zoned": false, 00:05:55.420 "supported_io_types": { 00:05:55.420 "read": true, 00:05:55.420 "write": true, 00:05:55.420 "unmap": true, 00:05:55.420 "flush": true, 00:05:55.420 "reset": true, 00:05:55.420 "nvme_admin": false, 00:05:55.420 "nvme_io": false, 00:05:55.420 "nvme_io_md": false, 00:05:55.420 "write_zeroes": true, 00:05:55.420 "zcopy": true, 00:05:55.420 "get_zone_info": false, 00:05:55.420 "zone_management": false, 00:05:55.420 "zone_append": false, 00:05:55.420 "compare": false, 00:05:55.420 "compare_and_write": false, 00:05:55.420 "abort": true, 00:05:55.420 "seek_hole": false, 00:05:55.420 "seek_data": false, 
00:05:55.420 "copy": true, 00:05:55.420 "nvme_iov_md": false 00:05:55.420 }, 00:05:55.420 "memory_domains": [ 00:05:55.420 { 00:05:55.420 "dma_device_id": "system", 00:05:55.420 "dma_device_type": 1 00:05:55.420 }, 00:05:55.420 { 00:05:55.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.420 "dma_device_type": 2 00:05:55.420 } 00:05:55.420 ], 00:05:55.420 "driver_specific": {} 00:05:55.420 }, 00:05:55.420 { 00:05:55.420 "name": "Passthru0", 00:05:55.420 "aliases": [ 00:05:55.420 "2fcc8cda-a8d8-5bb5-bfc2-93a458c13ad4" 00:05:55.420 ], 00:05:55.420 "product_name": "passthru", 00:05:55.420 "block_size": 512, 00:05:55.420 "num_blocks": 16384, 00:05:55.420 "uuid": "2fcc8cda-a8d8-5bb5-bfc2-93a458c13ad4", 00:05:55.420 "assigned_rate_limits": { 00:05:55.420 "rw_ios_per_sec": 0, 00:05:55.420 "rw_mbytes_per_sec": 0, 00:05:55.420 "r_mbytes_per_sec": 0, 00:05:55.420 "w_mbytes_per_sec": 0 00:05:55.420 }, 00:05:55.420 "claimed": false, 00:05:55.420 "zoned": false, 00:05:55.420 "supported_io_types": { 00:05:55.420 "read": true, 00:05:55.420 "write": true, 00:05:55.420 "unmap": true, 00:05:55.420 "flush": true, 00:05:55.420 "reset": true, 00:05:55.420 "nvme_admin": false, 00:05:55.420 "nvme_io": false, 00:05:55.420 "nvme_io_md": false, 00:05:55.420 "write_zeroes": true, 00:05:55.420 "zcopy": true, 00:05:55.420 "get_zone_info": false, 00:05:55.420 "zone_management": false, 00:05:55.420 "zone_append": false, 00:05:55.420 "compare": false, 00:05:55.420 "compare_and_write": false, 00:05:55.420 "abort": true, 00:05:55.420 "seek_hole": false, 00:05:55.420 "seek_data": false, 00:05:55.420 "copy": true, 00:05:55.420 "nvme_iov_md": false 00:05:55.420 }, 00:05:55.420 "memory_domains": [ 00:05:55.420 { 00:05:55.420 "dma_device_id": "system", 00:05:55.420 "dma_device_type": 1 00:05:55.420 }, 00:05:55.420 { 00:05:55.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.420 "dma_device_type": 2 00:05:55.420 } 00:05:55.420 ], 00:05:55.420 "driver_specific": { 00:05:55.420 "passthru": { 00:05:55.420 "name": "Passthru0", 00:05:55.420 "base_bdev_name": "Malloc2" 00:05:55.420 } 00:05:55.420 } 00:05:55.420 } 00:05:55.420 ]' 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
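One detail worth unpacking from the rpc_trace_cmd_test dump above: "tpoint_group_mask": "0x8" follows directly from launching spdk_tgt with -e bdev. The bdev tracepoint group occupies bit 3, so its group bit is 1 << 3 = 0x8; every tracepoint inside that group is enabled ("tpoint_mask": "0xffffffffffffffff") while all other groups stay at "0x0", which is exactly what the jq assertions on tpoint_group_mask and .bdev.tpoint_mask verify.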
00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:55.420 ************************************ 00:05:55.420 END TEST rpc_daemon_integrity 00:05:55.420 ************************************ 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:55.420 00:05:55.420 real 0m0.249s 00:05:55.420 user 0m0.123s 00:05:55.420 sys 0m0.041s 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.420 19:24:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.420 19:24:22 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:55.420 19:24:22 rpc -- rpc/rpc.sh@84 -- # killprocess 57311 00:05:55.420 19:24:22 rpc -- common/autotest_common.sh@954 -- # '[' -z 57311 ']' 00:05:55.420 19:24:22 rpc -- common/autotest_common.sh@958 -- # kill -0 57311 00:05:55.420 19:24:22 rpc -- common/autotest_common.sh@959 -- # uname 00:05:55.420 19:24:22 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.420 19:24:22 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57311 00:05:55.420 killing process with pid 57311 00:05:55.420 19:24:22 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.420 19:24:22 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.420 19:24:22 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57311' 00:05:55.420 19:24:22 rpc -- common/autotest_common.sh@973 -- # kill 57311 00:05:55.420 19:24:22 rpc -- common/autotest_common.sh@978 -- # wait 57311 00:05:57.338 ************************************ 00:05:57.338 END TEST rpc 00:05:57.338 ************************************ 00:05:57.338 00:05:57.338 real 0m3.706s 00:05:57.338 user 0m4.178s 00:05:57.338 sys 0m0.618s 00:05:57.338 19:24:24 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.338 19:24:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.338 19:24:24 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:57.338 19:24:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.338 19:24:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.338 19:24:24 -- common/autotest_common.sh@10 -- # set +x 00:05:57.338 ************************************ 00:05:57.338 START TEST skip_rpc 00:05:57.338 ************************************ 00:05:57.338 19:24:24 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:57.338 * Looking for test storage... 
00:05:57.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:57.338 19:24:24 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:57.338 19:24:24 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:57.338 19:24:24 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:57.338 19:24:24 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.338 19:24:24 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:57.338 19:24:24 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.338 19:24:24 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:57.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.338 --rc genhtml_branch_coverage=1 00:05:57.338 --rc genhtml_function_coverage=1 00:05:57.338 --rc genhtml_legend=1 00:05:57.338 --rc geninfo_all_blocks=1 00:05:57.338 --rc geninfo_unexecuted_blocks=1 00:05:57.338 00:05:57.338 ' 00:05:57.338 19:24:24 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:57.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.338 --rc genhtml_branch_coverage=1 00:05:57.338 --rc genhtml_function_coverage=1 00:05:57.338 --rc genhtml_legend=1 00:05:57.338 --rc geninfo_all_blocks=1 00:05:57.338 --rc geninfo_unexecuted_blocks=1 00:05:57.338 00:05:57.338 ' 00:05:57.338 19:24:24 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:57.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.338 --rc genhtml_branch_coverage=1 00:05:57.338 --rc genhtml_function_coverage=1 00:05:57.338 --rc genhtml_legend=1 00:05:57.338 --rc geninfo_all_blocks=1 00:05:57.338 --rc geninfo_unexecuted_blocks=1 00:05:57.338 00:05:57.338 ' 00:05:57.338 19:24:24 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:57.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.338 --rc genhtml_branch_coverage=1 00:05:57.338 --rc genhtml_function_coverage=1 00:05:57.338 --rc genhtml_legend=1 00:05:57.338 --rc geninfo_all_blocks=1 00:05:57.338 --rc geninfo_unexecuted_blocks=1 00:05:57.338 00:05:57.338 ' 00:05:57.338 19:24:24 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:57.338 19:24:24 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:57.338 19:24:24 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:57.338 19:24:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.338 19:24:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.338 19:24:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.338 ************************************ 00:05:57.338 START TEST skip_rpc 00:05:57.338 ************************************ 00:05:57.338 19:24:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:57.338 19:24:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57523 00:05:57.338 19:24:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.338 19:24:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:57.338 19:24:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:57.338 [2024-12-05 19:24:24.450206] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
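The test just launched spdk_tgt with --no-rpc-server, so the assertion that follows (NOT rpc_cmd spdk_get_version) passes only if the RPC fails. A minimal sketch of that shape, assuming a simplified NOT; the real helper in autotest_common.sh also screens exit codes above 128 as crashes, per the (( es > 128 )) checks later in the log:

    NOT() { ! "$@"; }                  # simplified: just invert the exit status
    spdk_tgt --no-rpc-server -m 0x1 &  # no RPC server -> /var/tmp/spdk.sock never opens
    spdk_pid=$!
    sleep 5                            # mirrors rpc/skip_rpc.sh@19 above
    NOT rpc_cmd spdk_get_version       # must fail: nothing is listening
    killprocess "$spdk_pid"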
00:05:57.339 [2024-12-05 19:24:24.450325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57523 ] 00:05:57.600 [2024-12-05 19:24:24.611619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.600 [2024-12-05 19:24:24.713741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57523 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57523 ']' 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57523 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57523 00:06:02.965 killing process with pid 57523 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57523' 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57523 00:06:02.965 19:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57523 00:06:03.962 00:06:03.962 real 0m6.561s 00:06:03.962 user 0m6.177s 00:06:03.962 sys 0m0.281s 00:06:03.962 19:24:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.962 ************************************ 00:06:03.962 END TEST skip_rpc 00:06:03.962 ************************************ 00:06:03.962 19:24:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:06:03.962 19:24:30 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:03.962 19:24:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.962 19:24:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.962 19:24:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.962 ************************************ 00:06:03.962 START TEST skip_rpc_with_json 00:06:03.962 ************************************ 00:06:03.962 19:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:03.962 19:24:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:03.962 19:24:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57622 00:06:03.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.962 19:24:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.962 19:24:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57622 00:06:03.962 19:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57622 ']' 00:06:03.962 19:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.962 19:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.962 19:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.962 19:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.962 19:24:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.962 19:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.962 [2024-12-05 19:24:31.072976] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
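The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the waitforlisten helper; a hedged sketch of the idea (the real autotest_common.sh version, with the max_retries=100 seen above, does more bookkeeping than this):

    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [[ -S $sock ]] && return 0               # socket is up; RPCs can connect
            sleep 0.1
        done
        return 1                                     # timed out waiting
    }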
00:06:03.962 [2024-12-05 19:24:31.073375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57622 ] 00:06:04.224 [2024-12-05 19:24:31.234589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.224 [2024-12-05 19:24:31.337509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.796 19:24:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.796 19:24:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:04.796 19:24:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:04.796 19:24:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.796 19:24:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.796 [2024-12-05 19:24:31.938749] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:04.796 request: 00:06:04.796 { 00:06:04.796 "trtype": "tcp", 00:06:04.796 "method": "nvmf_get_transports", 00:06:04.796 "req_id": 1 00:06:04.796 } 00:06:04.796 Got JSON-RPC error response 00:06:04.796 response: 00:06:04.796 { 00:06:04.796 "code": -19, 00:06:04.796 "message": "No such device" 00:06:04.796 } 00:06:04.796 19:24:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:04.796 19:24:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:04.796 19:24:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.796 19:24:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.796 [2024-12-05 19:24:31.950856] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:04.796 19:24:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.796 19:24:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:04.796 19:24:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.796 19:24:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:05.057 19:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.057 19:24:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:05.057 { 00:06:05.057 "subsystems": [ 00:06:05.057 { 00:06:05.057 "subsystem": "fsdev", 00:06:05.057 "config": [ 00:06:05.057 { 00:06:05.057 "method": "fsdev_set_opts", 00:06:05.057 "params": { 00:06:05.057 "fsdev_io_pool_size": 65535, 00:06:05.057 "fsdev_io_cache_size": 256 00:06:05.057 } 00:06:05.057 } 00:06:05.057 ] 00:06:05.057 }, 00:06:05.057 { 00:06:05.057 "subsystem": "keyring", 00:06:05.057 "config": [] 00:06:05.057 }, 00:06:05.057 { 00:06:05.057 "subsystem": "iobuf", 00:06:05.057 "config": [ 00:06:05.057 { 00:06:05.057 "method": "iobuf_set_options", 00:06:05.057 "params": { 00:06:05.057 "small_pool_count": 8192, 00:06:05.057 "large_pool_count": 1024, 00:06:05.057 "small_bufsize": 8192, 00:06:05.057 "large_bufsize": 135168, 00:06:05.057 "enable_numa": false 00:06:05.057 } 00:06:05.057 } 00:06:05.057 ] 00:06:05.057 }, 00:06:05.057 { 00:06:05.057 "subsystem": "sock", 00:06:05.057 "config": [ 00:06:05.057 { 
00:06:05.057 "method": "sock_set_default_impl", 00:06:05.057 "params": { 00:06:05.057 "impl_name": "posix" 00:06:05.057 } 00:06:05.057 }, 00:06:05.057 { 00:06:05.057 "method": "sock_impl_set_options", 00:06:05.057 "params": { 00:06:05.057 "impl_name": "ssl", 00:06:05.057 "recv_buf_size": 4096, 00:06:05.057 "send_buf_size": 4096, 00:06:05.057 "enable_recv_pipe": true, 00:06:05.057 "enable_quickack": false, 00:06:05.057 "enable_placement_id": 0, 00:06:05.057 "enable_zerocopy_send_server": true, 00:06:05.057 "enable_zerocopy_send_client": false, 00:06:05.057 "zerocopy_threshold": 0, 00:06:05.057 "tls_version": 0, 00:06:05.057 "enable_ktls": false 00:06:05.057 } 00:06:05.057 }, 00:06:05.057 { 00:06:05.057 "method": "sock_impl_set_options", 00:06:05.057 "params": { 00:06:05.057 "impl_name": "posix", 00:06:05.057 "recv_buf_size": 2097152, 00:06:05.057 "send_buf_size": 2097152, 00:06:05.057 "enable_recv_pipe": true, 00:06:05.057 "enable_quickack": false, 00:06:05.057 "enable_placement_id": 0, 00:06:05.057 "enable_zerocopy_send_server": true, 00:06:05.057 "enable_zerocopy_send_client": false, 00:06:05.057 "zerocopy_threshold": 0, 00:06:05.057 "tls_version": 0, 00:06:05.057 "enable_ktls": false 00:06:05.057 } 00:06:05.057 } 00:06:05.057 ] 00:06:05.058 }, 00:06:05.058 { 00:06:05.058 "subsystem": "vmd", 00:06:05.058 "config": [] 00:06:05.058 }, 00:06:05.058 { 00:06:05.058 "subsystem": "accel", 00:06:05.058 "config": [ 00:06:05.058 { 00:06:05.058 "method": "accel_set_options", 00:06:05.058 "params": { 00:06:05.058 "small_cache_size": 128, 00:06:05.058 "large_cache_size": 16, 00:06:05.058 "task_count": 2048, 00:06:05.058 "sequence_count": 2048, 00:06:05.058 "buf_count": 2048 00:06:05.058 } 00:06:05.058 } 00:06:05.058 ] 00:06:05.058 }, 00:06:05.058 { 00:06:05.058 "subsystem": "bdev", 00:06:05.058 "config": [ 00:06:05.058 { 00:06:05.058 "method": "bdev_set_options", 00:06:05.058 "params": { 00:06:05.058 "bdev_io_pool_size": 65535, 00:06:05.058 "bdev_io_cache_size": 256, 00:06:05.058 "bdev_auto_examine": true, 00:06:05.058 "iobuf_small_cache_size": 128, 00:06:05.058 "iobuf_large_cache_size": 16 00:06:05.058 } 00:06:05.058 }, 00:06:05.058 { 00:06:05.058 "method": "bdev_raid_set_options", 00:06:05.058 "params": { 00:06:05.058 "process_window_size_kb": 1024, 00:06:05.058 "process_max_bandwidth_mb_sec": 0 00:06:05.058 } 00:06:05.058 }, 00:06:05.058 { 00:06:05.058 "method": "bdev_iscsi_set_options", 00:06:05.058 "params": { 00:06:05.058 "timeout_sec": 30 00:06:05.058 } 00:06:05.058 }, 00:06:05.058 { 00:06:05.058 "method": "bdev_nvme_set_options", 00:06:05.058 "params": { 00:06:05.058 "action_on_timeout": "none", 00:06:05.058 "timeout_us": 0, 00:06:05.058 "timeout_admin_us": 0, 00:06:05.058 "keep_alive_timeout_ms": 10000, 00:06:05.058 "arbitration_burst": 0, 00:06:05.058 "low_priority_weight": 0, 00:06:05.058 "medium_priority_weight": 0, 00:06:05.058 "high_priority_weight": 0, 00:06:05.058 "nvme_adminq_poll_period_us": 10000, 00:06:05.058 "nvme_ioq_poll_period_us": 0, 00:06:05.058 "io_queue_requests": 0, 00:06:05.058 "delay_cmd_submit": true, 00:06:05.058 "transport_retry_count": 4, 00:06:05.058 "bdev_retry_count": 3, 00:06:05.058 "transport_ack_timeout": 0, 00:06:05.058 "ctrlr_loss_timeout_sec": 0, 00:06:05.058 "reconnect_delay_sec": 0, 00:06:05.058 "fast_io_fail_timeout_sec": 0, 00:06:05.058 "disable_auto_failback": false, 00:06:05.058 "generate_uuids": false, 00:06:05.058 "transport_tos": 0, 00:06:05.058 "nvme_error_stat": false, 00:06:05.058 "rdma_srq_size": 0, 00:06:05.058 "io_path_stat": false, 
00:06:05.058 "allow_accel_sequence": false, 00:06:05.058 "rdma_max_cq_size": 0, 00:06:05.058 "rdma_cm_event_timeout_ms": 0, 00:06:05.058 "dhchap_digests": [ 00:06:05.058 "sha256", 00:06:05.058 "sha384", 00:06:05.058 "sha512" 00:06:05.058 ], 00:06:05.058 "dhchap_dhgroups": [ 00:06:05.058 "null", 00:06:05.058 "ffdhe2048", 00:06:05.058 "ffdhe3072", 00:06:05.058 "ffdhe4096", 00:06:05.058 "ffdhe6144", 00:06:05.058 "ffdhe8192" 00:06:05.058 ] 00:06:05.058 } 00:06:05.058 }, 00:06:05.058 { 00:06:05.058 "method": "bdev_nvme_set_hotplug", 00:06:05.058 "params": { 00:06:05.058 "period_us": 100000, 00:06:05.058 "enable": false 00:06:05.058 } 00:06:05.058 }, 00:06:05.058 { 00:06:05.058 "method": "bdev_wait_for_examine" 00:06:05.058 } 00:06:05.058 ] 00:06:05.058 }, 00:06:05.058 { 00:06:05.058 "subsystem": "scsi", 00:06:05.058 "config": null 00:06:05.058 }, 00:06:05.058 { 00:06:05.058 "subsystem": "scheduler", 00:06:05.058 "config": [ 00:06:05.058 { 00:06:05.058 "method": "framework_set_scheduler", 00:06:05.058 "params": { 00:06:05.058 "name": "static" 00:06:05.058 } 00:06:05.058 } 00:06:05.058 ] 00:06:05.058 }, 00:06:05.058 { 00:06:05.058 "subsystem": "vhost_scsi", 00:06:05.058 "config": [] 00:06:05.058 }, 00:06:05.058 { 00:06:05.058 "subsystem": "vhost_blk", 00:06:05.058 "config": [] 00:06:05.058 }, 00:06:05.058 { 00:06:05.058 "subsystem": "ublk", 00:06:05.058 "config": [] 00:06:05.058 }, 00:06:05.058 { 00:06:05.058 "subsystem": "nbd", 00:06:05.058 "config": [] 00:06:05.058 }, 00:06:05.058 { 00:06:05.058 "subsystem": "nvmf", 00:06:05.058 "config": [ 00:06:05.058 { 00:06:05.058 "method": "nvmf_set_config", 00:06:05.058 "params": { 00:06:05.058 "discovery_filter": "match_any", 00:06:05.058 "admin_cmd_passthru": { 00:06:05.058 "identify_ctrlr": false 00:06:05.058 }, 00:06:05.058 "dhchap_digests": [ 00:06:05.058 "sha256", 00:06:05.058 "sha384", 00:06:05.058 "sha512" 00:06:05.058 ], 00:06:05.058 "dhchap_dhgroups": [ 00:06:05.058 "null", 00:06:05.058 "ffdhe2048", 00:06:05.058 "ffdhe3072", 00:06:05.058 "ffdhe4096", 00:06:05.058 "ffdhe6144", 00:06:05.058 "ffdhe8192" 00:06:05.058 ] 00:06:05.058 } 00:06:05.058 }, 00:06:05.058 { 00:06:05.058 "method": "nvmf_set_max_subsystems", 00:06:05.058 "params": { 00:06:05.058 "max_subsystems": 1024 00:06:05.058 } 00:06:05.058 }, 00:06:05.058 { 00:06:05.058 "method": "nvmf_set_crdt", 00:06:05.058 "params": { 00:06:05.058 "crdt1": 0, 00:06:05.058 "crdt2": 0, 00:06:05.058 "crdt3": 0 00:06:05.058 } 00:06:05.058 }, 00:06:05.058 { 00:06:05.058 "method": "nvmf_create_transport", 00:06:05.058 "params": { 00:06:05.058 "trtype": "TCP", 00:06:05.058 "max_queue_depth": 128, 00:06:05.058 "max_io_qpairs_per_ctrlr": 127, 00:06:05.058 "in_capsule_data_size": 4096, 00:06:05.058 "max_io_size": 131072, 00:06:05.058 "io_unit_size": 131072, 00:06:05.058 "max_aq_depth": 128, 00:06:05.058 "num_shared_buffers": 511, 00:06:05.058 "buf_cache_size": 4294967295, 00:06:05.058 "dif_insert_or_strip": false, 00:06:05.058 "zcopy": false, 00:06:05.058 "c2h_success": true, 00:06:05.058 "sock_priority": 0, 00:06:05.058 "abort_timeout_sec": 1, 00:06:05.058 "ack_timeout": 0, 00:06:05.058 "data_wr_pool_size": 0 00:06:05.058 } 00:06:05.058 } 00:06:05.058 ] 00:06:05.058 }, 00:06:05.058 { 00:06:05.058 "subsystem": "iscsi", 00:06:05.058 "config": [ 00:06:05.058 { 00:06:05.058 "method": "iscsi_set_options", 00:06:05.058 "params": { 00:06:05.058 "node_base": "iqn.2016-06.io.spdk", 00:06:05.058 "max_sessions": 128, 00:06:05.058 "max_connections_per_session": 2, 00:06:05.058 "max_queue_depth": 64, 00:06:05.058 
"default_time2wait": 2, 00:06:05.058 "default_time2retain": 20, 00:06:05.058 "first_burst_length": 8192, 00:06:05.058 "immediate_data": true, 00:06:05.058 "allow_duplicated_isid": false, 00:06:05.058 "error_recovery_level": 0, 00:06:05.058 "nop_timeout": 60, 00:06:05.058 "nop_in_interval": 30, 00:06:05.058 "disable_chap": false, 00:06:05.058 "require_chap": false, 00:06:05.058 "mutual_chap": false, 00:06:05.058 "chap_group": 0, 00:06:05.058 "max_large_datain_per_connection": 64, 00:06:05.058 "max_r2t_per_connection": 4, 00:06:05.058 "pdu_pool_size": 36864, 00:06:05.058 "immediate_data_pool_size": 16384, 00:06:05.058 "data_out_pool_size": 2048 00:06:05.058 } 00:06:05.058 } 00:06:05.058 ] 00:06:05.058 } 00:06:05.058 ] 00:06:05.058 } 00:06:05.058 19:24:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:05.058 19:24:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57622 00:06:05.058 19:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57622 ']' 00:06:05.058 19:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57622 00:06:05.058 19:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:05.058 19:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.058 19:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57622 00:06:05.058 killing process with pid 57622 00:06:05.058 19:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.058 19:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.058 19:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57622' 00:06:05.058 19:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57622 00:06:05.058 19:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57622 00:06:06.444 19:24:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57667 00:06:06.444 19:24:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:06.444 19:24:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:11.735 19:24:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57667 00:06:11.735 19:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57667 ']' 00:06:11.735 19:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57667 00:06:11.735 19:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:11.735 19:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.735 19:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57667 00:06:11.735 killing process with pid 57667 00:06:11.735 19:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.735 19:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.735 19:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57667' 00:06:11.735 19:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57667 00:06:11.735 19:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57667 00:06:13.121 19:24:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:13.121 19:24:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:13.121 ************************************ 00:06:13.121 END TEST skip_rpc_with_json 00:06:13.121 ************************************ 00:06:13.121 00:06:13.121 real 0m9.208s 00:06:13.121 user 0m8.801s 00:06:13.121 sys 0m0.619s 00:06:13.121 19:24:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.121 19:24:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.121 19:24:40 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:13.121 19:24:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.121 19:24:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.121 19:24:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.121 ************************************ 00:06:13.121 START TEST skip_rpc_with_delay 00:06:13.121 ************************************ 00:06:13.121 19:24:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:13.121 19:24:40 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:13.121 19:24:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:13.121 19:24:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:13.121 19:24:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:13.121 19:24:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.121 19:24:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:13.121 19:24:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.121 19:24:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:13.121 19:24:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.121 19:24:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:13.121 19:24:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:13.121 19:24:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:13.121 [2024-12-05 19:24:40.341495] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
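The error above is the expected outcome: --wait-for-rpc defers framework initialization until an RPC arrives, which is meaningless without an RPC server, so the combination must be rejected at startup (hence the ~0.1s real time reported below). A minimal sketch of the assertion, using the same NOT convention as the earlier tests:

    NOT spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    # or, spelled out without the helper:
    if spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'expected --wait-for-rpc to be rejected' >&2
        exit 1
    fi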
00:06:13.382 ************************************ 00:06:13.382 END TEST skip_rpc_with_delay 00:06:13.382 ************************************ 00:06:13.382 19:24:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:13.382 19:24:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.382 19:24:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:13.382 19:24:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:13.382 00:06:13.382 real 0m0.124s 00:06:13.382 user 0m0.060s 00:06:13.382 sys 0m0.062s 00:06:13.382 19:24:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.382 19:24:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:13.382 19:24:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:13.382 19:24:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:13.382 19:24:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:13.382 19:24:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.382 19:24:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.382 19:24:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.382 ************************************ 00:06:13.382 START TEST exit_on_failed_rpc_init 00:06:13.382 ************************************ 00:06:13.382 19:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:13.382 19:24:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57789 00:06:13.382 19:24:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57789 00:06:13.382 19:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57789 ']' 00:06:13.382 19:24:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.382 19:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.382 19:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.382 19:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.382 19:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.382 19:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:13.382 [2024-12-05 19:24:40.534046] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
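exit_on_failed_rpc_init stands up one target that owns /var/tmp/spdk.sock, then starts a second one below (core mask 0x2) that must fail RPC initialization with the "socket path ... in use" error and exit non-zero. A hedged sketch of that shape, reusing helpers named elsewhere in this log:

    spdk_tgt -m 0x1 &          # first instance binds /var/tmp/spdk.sock
    first=$!
    waitforlisten "$first"     # don't race: socket must be bound before the next step
    NOT spdk_tgt -m 0x2        # second instance cannot bind the socket -> init fails
    killprocess "$first"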
00:06:13.382 [2024-12-05 19:24:40.534172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57789 ] 00:06:13.642 [2024-12-05 19:24:40.693965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.642 [2024-12-05 19:24:40.798601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.225 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.225 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:14.225 19:24:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.225 19:24:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:14.225 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:14.225 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:14.225 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:14.225 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.225 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:14.225 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.225 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:14.225 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.225 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:14.225 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:14.225 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:14.487 [2024-12-05 19:24:41.511319] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:06:14.487 [2024-12-05 19:24:41.511443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57807 ] 00:06:14.487 [2024-12-05 19:24:41.672556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.748 [2024-12-05 19:24:41.774053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.748 [2024-12-05 19:24:41.774140] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:14.748 [2024-12-05 19:24:41.774153] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:14.748 [2024-12-05 19:24:41.774167] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.748 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:14.748 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.748 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:14.748 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:14.748 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:14.748 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.748 19:24:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:14.748 19:24:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57789 00:06:14.748 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57789 ']' 00:06:14.748 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57789 00:06:14.748 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:14.748 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.748 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57789 00:06:14.748 killing process with pid 57789 00:06:14.748 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.748 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.748 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57789' 00:06:14.748 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57789 00:06:14.748 19:24:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57789 00:06:16.726 ************************************ 00:06:16.726 END TEST exit_on_failed_rpc_init 00:06:16.726 ************************************ 00:06:16.726 00:06:16.726 real 0m3.058s 00:06:16.726 user 0m3.346s 00:06:16.726 sys 0m0.446s 00:06:16.726 19:24:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.726 19:24:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:16.726 19:24:43 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:16.726 00:06:16.726 real 0m19.358s 00:06:16.726 user 0m18.530s 00:06:16.726 sys 0m1.586s 00:06:16.726 19:24:43 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.726 ************************************ 00:06:16.726 END TEST skip_rpc 00:06:16.726 ************************************ 00:06:16.726 19:24:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.726 19:24:43 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:16.726 19:24:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.726 19:24:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.726 19:24:43 -- common/autotest_common.sh@10 -- # set +x 00:06:16.726 
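Every teardown in this log goes through the same killprocess sequence (pids 57311, 57523, 57622, 57667, 57789). A hedged sketch of that sequence, simplified from the autotest_common.sh steps visible above:

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" || return 1                # still alive? (kill -0 sends no signal)
        local name
        name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK target
        [[ $name == sudo ]] && return 1           # never signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                               # reap it before the next test starts
    }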
************************************ 00:06:16.726 START TEST rpc_client 00:06:16.726 ************************************ 00:06:16.726 19:24:43 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:16.726 * Looking for test storage... 00:06:16.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:16.726 19:24:43 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:16.726 19:24:43 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:16.726 19:24:43 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:06:16.726 19:24:43 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.726 19:24:43 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:16.726 19:24:43 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.726 19:24:43 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:16.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.726 --rc genhtml_branch_coverage=1 00:06:16.726 --rc genhtml_function_coverage=1 00:06:16.726 --rc genhtml_legend=1 00:06:16.726 --rc geninfo_all_blocks=1 00:06:16.726 --rc geninfo_unexecuted_blocks=1 00:06:16.726 00:06:16.726 ' 00:06:16.726 19:24:43 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:16.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.726 --rc genhtml_branch_coverage=1 00:06:16.726 --rc genhtml_function_coverage=1 00:06:16.726 --rc genhtml_legend=1 00:06:16.726 --rc geninfo_all_blocks=1 00:06:16.726 --rc geninfo_unexecuted_blocks=1 00:06:16.726 00:06:16.726 ' 00:06:16.726 19:24:43 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:16.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.726 --rc genhtml_branch_coverage=1 00:06:16.726 --rc genhtml_function_coverage=1 00:06:16.726 --rc genhtml_legend=1 00:06:16.726 --rc geninfo_all_blocks=1 00:06:16.726 --rc geninfo_unexecuted_blocks=1 00:06:16.726 00:06:16.726 ' 00:06:16.726 19:24:43 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:16.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.726 --rc genhtml_branch_coverage=1 00:06:16.726 --rc genhtml_function_coverage=1 00:06:16.726 --rc genhtml_legend=1 00:06:16.726 --rc geninfo_all_blocks=1 00:06:16.726 --rc geninfo_unexecuted_blocks=1 00:06:16.726 00:06:16.726 ' 00:06:16.726 19:24:43 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:16.726 OK 00:06:16.726 19:24:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:16.726 00:06:16.726 real 0m0.205s 00:06:16.726 user 0m0.112s 00:06:16.726 sys 0m0.093s 00:06:16.726 ************************************ 00:06:16.726 END TEST rpc_client 00:06:16.726 ************************************ 00:06:16.726 19:24:43 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.726 19:24:43 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:16.726 19:24:43 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:16.726 19:24:43 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.726 19:24:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.726 19:24:43 -- common/autotest_common.sh@10 -- # set +x 00:06:16.726 ************************************ 00:06:16.726 START TEST json_config 00:06:16.726 ************************************ 00:06:16.726 19:24:43 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:16.726 19:24:43 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:16.726 19:24:43 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:06:16.726 19:24:43 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:16.986 19:24:43 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:16.986 19:24:43 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.986 19:24:43 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.986 19:24:43 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.986 19:24:44 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.986 19:24:44 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.986 19:24:44 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.986 19:24:44 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.986 19:24:44 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.986 19:24:44 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.986 19:24:44 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.986 19:24:44 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.986 19:24:44 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:16.986 19:24:44 json_config -- scripts/common.sh@345 -- # : 1 00:06:16.986 19:24:44 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.986 19:24:44 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.986 19:24:44 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:16.986 19:24:44 json_config -- scripts/common.sh@353 -- # local d=1 00:06:16.986 19:24:44 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.986 19:24:44 json_config -- scripts/common.sh@355 -- # echo 1 00:06:16.986 19:24:44 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.986 19:24:44 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:16.986 19:24:44 json_config -- scripts/common.sh@353 -- # local d=2 00:06:16.986 19:24:44 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.986 19:24:44 json_config -- scripts/common.sh@355 -- # echo 2 00:06:16.986 19:24:44 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.986 19:24:44 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.986 19:24:44 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.986 19:24:44 json_config -- scripts/common.sh@368 -- # return 0 00:06:16.986 19:24:44 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.986 19:24:44 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:16.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.986 --rc genhtml_branch_coverage=1 00:06:16.986 --rc genhtml_function_coverage=1 00:06:16.986 --rc genhtml_legend=1 00:06:16.986 --rc geninfo_all_blocks=1 00:06:16.986 --rc geninfo_unexecuted_blocks=1 00:06:16.986 00:06:16.986 ' 00:06:16.986 19:24:44 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:16.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.986 --rc genhtml_branch_coverage=1 00:06:16.986 --rc genhtml_function_coverage=1 00:06:16.986 --rc genhtml_legend=1 00:06:16.986 --rc geninfo_all_blocks=1 00:06:16.986 --rc geninfo_unexecuted_blocks=1 00:06:16.986 00:06:16.986 ' 00:06:16.986 19:24:44 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:16.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.986 --rc genhtml_branch_coverage=1 00:06:16.986 --rc genhtml_function_coverage=1 00:06:16.986 --rc genhtml_legend=1 00:06:16.986 --rc geninfo_all_blocks=1 00:06:16.986 --rc geninfo_unexecuted_blocks=1 00:06:16.986 00:06:16.986 ' 00:06:16.986 19:24:44 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:16.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.986 --rc genhtml_branch_coverage=1 00:06:16.986 --rc genhtml_function_coverage=1 00:06:16.986 --rc genhtml_legend=1 00:06:16.986 --rc geninfo_all_blocks=1 00:06:16.986 --rc geninfo_unexecuted_blocks=1 00:06:16.986 00:06:16.986 ' 00:06:16.986 19:24:44 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:16.986 19:24:44 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:16.986 19:24:44 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.987 19:24:44 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:672327fd-94cc-407c-a6be-ea572201c4d7 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=672327fd-94cc-407c-a6be-ea572201c4d7 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:16.987 19:24:44 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:16.987 19:24:44 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.987 19:24:44 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.987 19:24:44 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.987 19:24:44 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.987 19:24:44 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.987 19:24:44 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.987 19:24:44 json_config -- paths/export.sh@5 -- # export PATH 00:06:16.987 19:24:44 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@51 -- # : 0 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:16.987 19:24:44 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:16.987 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:16.987 19:24:44 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:16.987 19:24:44 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:16.987 19:24:44 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:16.987 19:24:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:16.987 19:24:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:16.987 19:24:44 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:16.987 19:24:44 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:16.987 WARNING: No tests are enabled so not running JSON configuration tests 00:06:16.987 19:24:44 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:16.987 00:06:16.987 real 0m0.154s 00:06:16.987 user 0m0.092s 00:06:16.987 sys 0m0.061s 00:06:16.987 19:24:44 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.987 19:24:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.987 ************************************ 00:06:16.987 END TEST json_config 00:06:16.987 ************************************ 00:06:16.987 19:24:44 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:16.987 19:24:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.987 19:24:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.987 19:24:44 -- common/autotest_common.sh@10 -- # set +x 00:06:16.987 ************************************ 00:06:16.987 START TEST json_config_extra_key 00:06:16.987 ************************************ 00:06:16.987 19:24:44 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:16.987 19:24:44 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:16.987 19:24:44 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:06:16.987 19:24:44 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:16.987 19:24:44 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.987 19:24:44 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.987 19:24:44 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:16.987 19:24:44 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.987 19:24:44 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:16.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.987 --rc genhtml_branch_coverage=1 00:06:16.987 --rc genhtml_function_coverage=1 00:06:16.987 --rc genhtml_legend=1 00:06:16.987 --rc geninfo_all_blocks=1 00:06:16.987 --rc geninfo_unexecuted_blocks=1 00:06:16.987 00:06:16.987 ' 00:06:16.987 19:24:44 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:16.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.987 --rc genhtml_branch_coverage=1 00:06:16.987 --rc genhtml_function_coverage=1 00:06:16.987 --rc genhtml_legend=1 00:06:16.987 --rc geninfo_all_blocks=1 00:06:16.987 --rc geninfo_unexecuted_blocks=1 00:06:16.987 00:06:16.987 ' 00:06:16.987 19:24:44 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:16.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.987 --rc genhtml_branch_coverage=1 00:06:16.987 --rc genhtml_function_coverage=1 00:06:16.987 --rc genhtml_legend=1 00:06:16.987 --rc geninfo_all_blocks=1 00:06:16.987 --rc geninfo_unexecuted_blocks=1 00:06:16.987 00:06:16.987 ' 00:06:16.987 19:24:44 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:16.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.987 --rc genhtml_branch_coverage=1 00:06:16.987 --rc 
genhtml_function_coverage=1 00:06:16.987 --rc genhtml_legend=1 00:06:16.987 --rc geninfo_all_blocks=1 00:06:16.987 --rc geninfo_unexecuted_blocks=1 00:06:16.987 00:06:16.987 ' 00:06:16.987 19:24:44 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:16.987 19:24:44 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:16.988 19:24:44 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.988 19:24:44 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.988 19:24:44 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.988 19:24:44 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.988 19:24:44 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.988 19:24:44 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.988 19:24:44 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.988 19:24:44 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.988 19:24:44 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.988 19:24:44 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.988 19:24:44 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:672327fd-94cc-407c-a6be-ea572201c4d7 00:06:16.988 19:24:44 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=672327fd-94cc-407c-a6be-ea572201c4d7 00:06:16.988 19:24:44 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.988 19:24:44 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.988 19:24:44 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:16.988 19:24:44 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.988 19:24:44 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:16.988 19:24:44 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:17.249 19:24:44 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.249 19:24:44 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.249 19:24:44 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.249 19:24:44 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.249 19:24:44 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.249 19:24:44 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.250 19:24:44 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:17.250 19:24:44 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.250 19:24:44 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:17.250 19:24:44 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:17.250 19:24:44 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:17.250 19:24:44 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:17.250 19:24:44 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:17.250 19:24:44 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:17.250 19:24:44 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:17.250 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:17.250 19:24:44 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:17.250 19:24:44 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:17.250 19:24:44 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:17.250 19:24:44 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:17.250 19:24:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:17.250 19:24:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:17.250 19:24:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:17.250 19:24:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:17.250 19:24:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:17.250 19:24:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:17.250 19:24:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:17.250 INFO: launching applications... 00:06:17.250 19:24:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:17.250 19:24:44 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:17.250 19:24:44 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
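An aside on the bookkeeping just traced: json_config/common.sh tracks each managed application in bash associative arrays keyed by app name, which is what the declare -A lines above establish. A minimal sketch of that pattern, with the values copied from the trace:

    #!/usr/bin/env bash
    # One key per managed app; only "target" is used by this test.
    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    app=target
    echo "launching $app on ${app_socket[$app]} with ${app_params[$app]}"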
00:06:17.250 19:24:44 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:17.250 19:24:44 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:17.250 19:24:44 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:17.250 19:24:44 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:17.250 19:24:44 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:17.250 19:24:44 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:17.250 19:24:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:17.250 19:24:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:17.250 19:24:44 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58001 00:06:17.250 19:24:44 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:17.250 Waiting for target to run... 00:06:17.250 19:24:44 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:17.250 19:24:44 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58001 /var/tmp/spdk_tgt.sock 00:06:17.250 19:24:44 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58001 ']' 00:06:17.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:17.250 19:24:44 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:17.250 19:24:44 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.250 19:24:44 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:17.250 19:24:44 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.250 19:24:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:17.250 [2024-12-05 19:24:44.354463] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:06:17.250 [2024-12-05 19:24:44.354649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58001 ] 00:06:17.512 [2024-12-05 19:24:44.694908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.772 [2024-12-05 19:24:44.786125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.343 00:06:18.343 INFO: shutting down applications... 00:06:18.343 19:24:45 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.343 19:24:45 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:18.343 19:24:45 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:18.343 19:24:45 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
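The shutdown sequence traced next is a bounded poll: common.sh sends SIGINT, then checks kill -0 every half second for up to 30 iterations (the sleep 0.5 rounds visible below). A hedged sketch of that loop; the function name here is illustrative, the real logic lives in json_config/common.sh:

    # Send SIGINT, then wait up to ~15 s (30 x 0.5 s) for the target to exit.
    shutdown_app() {
      local pid=$1 i
      kill -SIGINT "$pid"
      for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 0   # process gone: clean shutdown
        sleep 0.5
      done
      return 1                                   # still alive after the budget
    }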
00:06:18.343 19:24:45 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:18.343 19:24:45 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:18.343 19:24:45 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:18.343 19:24:45 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58001 ]] 00:06:18.343 19:24:45 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58001 00:06:18.343 19:24:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:18.343 19:24:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:18.343 19:24:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58001 00:06:18.343 19:24:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:18.602 19:24:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:18.602 19:24:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:18.602 19:24:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58001 00:06:18.602 19:24:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:19.172 19:24:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:19.172 19:24:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:19.172 19:24:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58001 00:06:19.172 19:24:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:19.772 19:24:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:19.772 19:24:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:19.772 19:24:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58001 00:06:19.772 19:24:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:20.344 19:24:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:20.344 19:24:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.344 19:24:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58001 00:06:20.344 SPDK target shutdown done 00:06:20.344 19:24:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:20.344 19:24:47 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:20.344 19:24:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:20.345 19:24:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:20.345 Success 00:06:20.345 19:24:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:20.345 ************************************ 00:06:20.345 END TEST json_config_extra_key 00:06:20.345 ************************************ 00:06:20.345 00:06:20.345 real 0m3.218s 00:06:20.345 user 0m2.841s 00:06:20.345 sys 0m0.430s 00:06:20.345 19:24:47 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.345 19:24:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:20.345 19:24:47 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:20.345 19:24:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.345 19:24:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.345 19:24:47 -- common/autotest_common.sh@10 -- # set +x 00:06:20.345 
************************************ 00:06:20.345 START TEST alias_rpc 00:06:20.345 ************************************ 00:06:20.345 19:24:47 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:20.345 * Looking for test storage... 00:06:20.345 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:20.345 19:24:47 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:20.345 19:24:47 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:20.345 19:24:47 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:20.345 19:24:47 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.345 19:24:47 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:20.345 19:24:47 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.345 19:24:47 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:20.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.345 --rc genhtml_branch_coverage=1 00:06:20.345 --rc genhtml_function_coverage=1 00:06:20.345 --rc genhtml_legend=1 00:06:20.345 --rc geninfo_all_blocks=1 00:06:20.345 --rc geninfo_unexecuted_blocks=1 00:06:20.345 00:06:20.345 ' 00:06:20.345 19:24:47 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:20.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.345 --rc genhtml_branch_coverage=1 00:06:20.345 --rc genhtml_function_coverage=1 00:06:20.345 --rc genhtml_legend=1 00:06:20.345 --rc geninfo_all_blocks=1 00:06:20.345 --rc geninfo_unexecuted_blocks=1 00:06:20.345 00:06:20.345 ' 00:06:20.345 19:24:47 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:20.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.345 --rc genhtml_branch_coverage=1 00:06:20.345 --rc genhtml_function_coverage=1 00:06:20.345 --rc genhtml_legend=1 00:06:20.345 --rc geninfo_all_blocks=1 00:06:20.345 --rc geninfo_unexecuted_blocks=1 00:06:20.345 00:06:20.345 ' 00:06:20.345 19:24:47 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:20.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.345 --rc genhtml_branch_coverage=1 00:06:20.345 --rc genhtml_function_coverage=1 00:06:20.345 --rc genhtml_legend=1 00:06:20.345 --rc geninfo_all_blocks=1 00:06:20.345 --rc geninfo_unexecuted_blocks=1 00:06:20.345 00:06:20.345 ' 00:06:20.345 19:24:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:20.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
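The "Waiting for process..." message above comes from waitforlisten in common/autotest_common.sh, which the trace shows entering with max_retries=100. The real helper verifies the RPC endpoint more thoroughly; the sketch below only captures the bounded poll, and the 0.1 s interval is an assumption, not taken from the trace:

    # Hedged approximation of a waitforlisten-style poll (name illustrative).
    wait_for_rpc_socket() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
      while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
        [[ -S "$sock" ]] && return 0             # UNIX-domain socket is up
        sleep 0.1                                # interval assumed, not traced
      done
      return 1
    }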
00:06:20.345 19:24:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58094 00:06:20.345 19:24:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58094 00:06:20.345 19:24:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.345 19:24:47 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58094 ']' 00:06:20.345 19:24:47 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.345 19:24:47 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.345 19:24:47 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.345 19:24:47 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.345 19:24:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.606 [2024-12-05 19:24:47.612544] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:06:20.606 [2024-12-05 19:24:47.612815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58094 ] 00:06:20.606 [2024-12-05 19:24:47.775966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.868 [2024-12-05 19:24:47.878590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.442 19:24:48 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.442 19:24:48 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:21.442 19:24:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:21.704 19:24:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58094 00:06:21.704 19:24:48 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58094 ']' 00:06:21.704 19:24:48 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58094 00:06:21.704 19:24:48 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:21.704 19:24:48 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.704 19:24:48 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58094 00:06:21.704 killing process with pid 58094 00:06:21.704 19:24:48 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.704 19:24:48 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.704 19:24:48 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58094' 00:06:21.704 19:24:48 alias_rpc -- common/autotest_common.sh@973 -- # kill 58094 00:06:21.704 19:24:48 alias_rpc -- common/autotest_common.sh@978 -- # wait 58094 00:06:23.091 ************************************ 00:06:23.091 END TEST alias_rpc 00:06:23.091 ************************************ 00:06:23.091 00:06:23.091 real 0m2.893s 00:06:23.091 user 0m2.979s 00:06:23.091 sys 0m0.422s 00:06:23.091 19:24:50 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.091 19:24:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.091 19:24:50 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:23.091 19:24:50 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:23.091 19:24:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.091 19:24:50 -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:06:23.091 19:24:50 -- common/autotest_common.sh@10 -- # set +x 00:06:23.091 ************************************ 00:06:23.091 START TEST spdkcli_tcp 00:06:23.091 ************************************ 00:06:23.091 19:24:50 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:23.351 * Looking for test storage... 00:06:23.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:23.351 19:24:50 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:23.351 19:24:50 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:23.351 19:24:50 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:23.351 19:24:50 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:23.351 19:24:50 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.352 19:24:50 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.352 19:24:50 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.352 19:24:50 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:23.352 19:24:50 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.352 19:24:50 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:23.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.352 --rc genhtml_branch_coverage=1 00:06:23.352 --rc genhtml_function_coverage=1 00:06:23.352 --rc genhtml_legend=1 00:06:23.352 --rc geninfo_all_blocks=1 00:06:23.352 --rc geninfo_unexecuted_blocks=1 00:06:23.352 00:06:23.352 ' 00:06:23.352 19:24:50 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:23.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.352 --rc genhtml_branch_coverage=1 00:06:23.352 --rc genhtml_function_coverage=1 00:06:23.352 --rc genhtml_legend=1 00:06:23.352 --rc geninfo_all_blocks=1 00:06:23.352 --rc geninfo_unexecuted_blocks=1 00:06:23.352 00:06:23.352 ' 00:06:23.352 19:24:50 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:23.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.352 --rc genhtml_branch_coverage=1 00:06:23.352 --rc genhtml_function_coverage=1 00:06:23.352 --rc genhtml_legend=1 00:06:23.352 --rc geninfo_all_blocks=1 00:06:23.352 --rc geninfo_unexecuted_blocks=1 00:06:23.352 00:06:23.352 ' 00:06:23.352 19:24:50 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:23.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.352 --rc genhtml_branch_coverage=1 00:06:23.352 --rc genhtml_function_coverage=1 00:06:23.352 --rc genhtml_legend=1 00:06:23.352 --rc geninfo_all_blocks=1 00:06:23.352 --rc geninfo_unexecuted_blocks=1 00:06:23.352 00:06:23.352 ' 00:06:23.352 19:24:50 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:23.352 19:24:50 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:23.352 19:24:50 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:23.352 19:24:50 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:23.352 19:24:50 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:23.352 19:24:50 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:23.352 19:24:50 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:23.352 19:24:50 spdkcli_tcp -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:23.352 19:24:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.352 19:24:50 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58190 00:06:23.352 19:24:50 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58190 00:06:23.352 19:24:50 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58190 ']' 00:06:23.352 19:24:50 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.352 19:24:50 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.352 19:24:50 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.352 19:24:50 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.352 19:24:50 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:23.352 19:24:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.352 [2024-12-05 19:24:50.562146] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:06:23.352 [2024-12-05 19:24:50.562264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58190 ] 00:06:23.613 [2024-12-05 19:24:50.725652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.613 [2024-12-05 19:24:50.831817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.613 [2024-12-05 19:24:50.831822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.185 19:24:51 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.185 19:24:51 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:24.185 19:24:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58207 00:06:24.185 19:24:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:24.185 19:24:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:24.448 [ 00:06:24.448 "bdev_malloc_delete", 00:06:24.448 "bdev_malloc_create", 00:06:24.448 "bdev_null_resize", 00:06:24.448 "bdev_null_delete", 00:06:24.448 "bdev_null_create", 00:06:24.448 "bdev_nvme_cuse_unregister", 00:06:24.448 "bdev_nvme_cuse_register", 00:06:24.448 "bdev_opal_new_user", 00:06:24.448 "bdev_opal_set_lock_state", 00:06:24.448 "bdev_opal_delete", 00:06:24.448 "bdev_opal_get_info", 00:06:24.448 "bdev_opal_create", 00:06:24.448 "bdev_nvme_opal_revert", 00:06:24.448 "bdev_nvme_opal_init", 00:06:24.448 "bdev_nvme_send_cmd", 00:06:24.448 "bdev_nvme_set_keys", 00:06:24.448 "bdev_nvme_get_path_iostat", 00:06:24.448 "bdev_nvme_get_mdns_discovery_info", 00:06:24.448 "bdev_nvme_stop_mdns_discovery", 00:06:24.448 "bdev_nvme_start_mdns_discovery", 00:06:24.448 "bdev_nvme_set_multipath_policy", 00:06:24.448 "bdev_nvme_set_preferred_path", 00:06:24.448 "bdev_nvme_get_io_paths", 00:06:24.448 "bdev_nvme_remove_error_injection", 00:06:24.448 "bdev_nvme_add_error_injection", 00:06:24.448 "bdev_nvme_get_discovery_info", 00:06:24.448 "bdev_nvme_stop_discovery", 00:06:24.448 "bdev_nvme_start_discovery", 00:06:24.448 
"bdev_nvme_get_controller_health_info", 00:06:24.448 "bdev_nvme_disable_controller", 00:06:24.448 "bdev_nvme_enable_controller", 00:06:24.448 "bdev_nvme_reset_controller", 00:06:24.448 "bdev_nvme_get_transport_statistics", 00:06:24.448 "bdev_nvme_apply_firmware", 00:06:24.448 "bdev_nvme_detach_controller", 00:06:24.448 "bdev_nvme_get_controllers", 00:06:24.448 "bdev_nvme_attach_controller", 00:06:24.448 "bdev_nvme_set_hotplug", 00:06:24.448 "bdev_nvme_set_options", 00:06:24.448 "bdev_passthru_delete", 00:06:24.448 "bdev_passthru_create", 00:06:24.448 "bdev_lvol_set_parent_bdev", 00:06:24.448 "bdev_lvol_set_parent", 00:06:24.448 "bdev_lvol_check_shallow_copy", 00:06:24.448 "bdev_lvol_start_shallow_copy", 00:06:24.448 "bdev_lvol_grow_lvstore", 00:06:24.448 "bdev_lvol_get_lvols", 00:06:24.448 "bdev_lvol_get_lvstores", 00:06:24.448 "bdev_lvol_delete", 00:06:24.448 "bdev_lvol_set_read_only", 00:06:24.448 "bdev_lvol_resize", 00:06:24.448 "bdev_lvol_decouple_parent", 00:06:24.448 "bdev_lvol_inflate", 00:06:24.448 "bdev_lvol_rename", 00:06:24.448 "bdev_lvol_clone_bdev", 00:06:24.448 "bdev_lvol_clone", 00:06:24.448 "bdev_lvol_snapshot", 00:06:24.448 "bdev_lvol_create", 00:06:24.448 "bdev_lvol_delete_lvstore", 00:06:24.448 "bdev_lvol_rename_lvstore", 00:06:24.448 "bdev_lvol_create_lvstore", 00:06:24.448 "bdev_raid_set_options", 00:06:24.448 "bdev_raid_remove_base_bdev", 00:06:24.448 "bdev_raid_add_base_bdev", 00:06:24.448 "bdev_raid_delete", 00:06:24.448 "bdev_raid_create", 00:06:24.448 "bdev_raid_get_bdevs", 00:06:24.448 "bdev_error_inject_error", 00:06:24.448 "bdev_error_delete", 00:06:24.448 "bdev_error_create", 00:06:24.448 "bdev_split_delete", 00:06:24.448 "bdev_split_create", 00:06:24.448 "bdev_delay_delete", 00:06:24.448 "bdev_delay_create", 00:06:24.448 "bdev_delay_update_latency", 00:06:24.448 "bdev_zone_block_delete", 00:06:24.448 "bdev_zone_block_create", 00:06:24.448 "blobfs_create", 00:06:24.448 "blobfs_detect", 00:06:24.448 "blobfs_set_cache_size", 00:06:24.448 "bdev_xnvme_delete", 00:06:24.448 "bdev_xnvme_create", 00:06:24.448 "bdev_aio_delete", 00:06:24.448 "bdev_aio_rescan", 00:06:24.448 "bdev_aio_create", 00:06:24.448 "bdev_ftl_set_property", 00:06:24.448 "bdev_ftl_get_properties", 00:06:24.448 "bdev_ftl_get_stats", 00:06:24.448 "bdev_ftl_unmap", 00:06:24.448 "bdev_ftl_unload", 00:06:24.448 "bdev_ftl_delete", 00:06:24.448 "bdev_ftl_load", 00:06:24.448 "bdev_ftl_create", 00:06:24.448 "bdev_virtio_attach_controller", 00:06:24.448 "bdev_virtio_scsi_get_devices", 00:06:24.448 "bdev_virtio_detach_controller", 00:06:24.448 "bdev_virtio_blk_set_hotplug", 00:06:24.448 "bdev_iscsi_delete", 00:06:24.448 "bdev_iscsi_create", 00:06:24.448 "bdev_iscsi_set_options", 00:06:24.448 "accel_error_inject_error", 00:06:24.448 "ioat_scan_accel_module", 00:06:24.448 "dsa_scan_accel_module", 00:06:24.448 "iaa_scan_accel_module", 00:06:24.448 "keyring_file_remove_key", 00:06:24.448 "keyring_file_add_key", 00:06:24.448 "keyring_linux_set_options", 00:06:24.448 "fsdev_aio_delete", 00:06:24.448 "fsdev_aio_create", 00:06:24.448 "iscsi_get_histogram", 00:06:24.448 "iscsi_enable_histogram", 00:06:24.448 "iscsi_set_options", 00:06:24.448 "iscsi_get_auth_groups", 00:06:24.448 "iscsi_auth_group_remove_secret", 00:06:24.448 "iscsi_auth_group_add_secret", 00:06:24.448 "iscsi_delete_auth_group", 00:06:24.448 "iscsi_create_auth_group", 00:06:24.448 "iscsi_set_discovery_auth", 00:06:24.448 "iscsi_get_options", 00:06:24.448 "iscsi_target_node_request_logout", 00:06:24.448 "iscsi_target_node_set_redirect", 00:06:24.448 
"iscsi_target_node_set_auth", 00:06:24.448 "iscsi_target_node_add_lun", 00:06:24.448 "iscsi_get_stats", 00:06:24.448 "iscsi_get_connections", 00:06:24.448 "iscsi_portal_group_set_auth", 00:06:24.448 "iscsi_start_portal_group", 00:06:24.448 "iscsi_delete_portal_group", 00:06:24.448 "iscsi_create_portal_group", 00:06:24.448 "iscsi_get_portal_groups", 00:06:24.448 "iscsi_delete_target_node", 00:06:24.448 "iscsi_target_node_remove_pg_ig_maps", 00:06:24.448 "iscsi_target_node_add_pg_ig_maps", 00:06:24.448 "iscsi_create_target_node", 00:06:24.448 "iscsi_get_target_nodes", 00:06:24.448 "iscsi_delete_initiator_group", 00:06:24.448 "iscsi_initiator_group_remove_initiators", 00:06:24.448 "iscsi_initiator_group_add_initiators", 00:06:24.448 "iscsi_create_initiator_group", 00:06:24.448 "iscsi_get_initiator_groups", 00:06:24.448 "nvmf_set_crdt", 00:06:24.448 "nvmf_set_config", 00:06:24.448 "nvmf_set_max_subsystems", 00:06:24.448 "nvmf_stop_mdns_prr", 00:06:24.448 "nvmf_publish_mdns_prr", 00:06:24.448 "nvmf_subsystem_get_listeners", 00:06:24.448 "nvmf_subsystem_get_qpairs", 00:06:24.448 "nvmf_subsystem_get_controllers", 00:06:24.448 "nvmf_get_stats", 00:06:24.448 "nvmf_get_transports", 00:06:24.448 "nvmf_create_transport", 00:06:24.448 "nvmf_get_targets", 00:06:24.448 "nvmf_delete_target", 00:06:24.448 "nvmf_create_target", 00:06:24.448 "nvmf_subsystem_allow_any_host", 00:06:24.448 "nvmf_subsystem_set_keys", 00:06:24.448 "nvmf_subsystem_remove_host", 00:06:24.448 "nvmf_subsystem_add_host", 00:06:24.448 "nvmf_ns_remove_host", 00:06:24.448 "nvmf_ns_add_host", 00:06:24.448 "nvmf_subsystem_remove_ns", 00:06:24.448 "nvmf_subsystem_set_ns_ana_group", 00:06:24.448 "nvmf_subsystem_add_ns", 00:06:24.448 "nvmf_subsystem_listener_set_ana_state", 00:06:24.448 "nvmf_discovery_get_referrals", 00:06:24.449 "nvmf_discovery_remove_referral", 00:06:24.449 "nvmf_discovery_add_referral", 00:06:24.449 "nvmf_subsystem_remove_listener", 00:06:24.449 "nvmf_subsystem_add_listener", 00:06:24.449 "nvmf_delete_subsystem", 00:06:24.449 "nvmf_create_subsystem", 00:06:24.449 "nvmf_get_subsystems", 00:06:24.449 "env_dpdk_get_mem_stats", 00:06:24.449 "nbd_get_disks", 00:06:24.449 "nbd_stop_disk", 00:06:24.449 "nbd_start_disk", 00:06:24.449 "ublk_recover_disk", 00:06:24.449 "ublk_get_disks", 00:06:24.449 "ublk_stop_disk", 00:06:24.449 "ublk_start_disk", 00:06:24.449 "ublk_destroy_target", 00:06:24.449 "ublk_create_target", 00:06:24.449 "virtio_blk_create_transport", 00:06:24.449 "virtio_blk_get_transports", 00:06:24.449 "vhost_controller_set_coalescing", 00:06:24.449 "vhost_get_controllers", 00:06:24.449 "vhost_delete_controller", 00:06:24.449 "vhost_create_blk_controller", 00:06:24.449 "vhost_scsi_controller_remove_target", 00:06:24.449 "vhost_scsi_controller_add_target", 00:06:24.449 "vhost_start_scsi_controller", 00:06:24.449 "vhost_create_scsi_controller", 00:06:24.449 "thread_set_cpumask", 00:06:24.449 "scheduler_set_options", 00:06:24.449 "framework_get_governor", 00:06:24.449 "framework_get_scheduler", 00:06:24.449 "framework_set_scheduler", 00:06:24.449 "framework_get_reactors", 00:06:24.449 "thread_get_io_channels", 00:06:24.449 "thread_get_pollers", 00:06:24.449 "thread_get_stats", 00:06:24.449 "framework_monitor_context_switch", 00:06:24.449 "spdk_kill_instance", 00:06:24.449 "log_enable_timestamps", 00:06:24.449 "log_get_flags", 00:06:24.449 "log_clear_flag", 00:06:24.449 "log_set_flag", 00:06:24.449 "log_get_level", 00:06:24.449 "log_set_level", 00:06:24.449 "log_get_print_level", 00:06:24.449 "log_set_print_level", 
00:06:24.449 "framework_enable_cpumask_locks", 00:06:24.449 "framework_disable_cpumask_locks", 00:06:24.449 "framework_wait_init", 00:06:24.449 "framework_start_init", 00:06:24.449 "scsi_get_devices", 00:06:24.449 "bdev_get_histogram", 00:06:24.449 "bdev_enable_histogram", 00:06:24.449 "bdev_set_qos_limit", 00:06:24.449 "bdev_set_qd_sampling_period", 00:06:24.449 "bdev_get_bdevs", 00:06:24.449 "bdev_reset_iostat", 00:06:24.449 "bdev_get_iostat", 00:06:24.449 "bdev_examine", 00:06:24.449 "bdev_wait_for_examine", 00:06:24.449 "bdev_set_options", 00:06:24.449 "accel_get_stats", 00:06:24.449 "accel_set_options", 00:06:24.449 "accel_set_driver", 00:06:24.449 "accel_crypto_key_destroy", 00:06:24.449 "accel_crypto_keys_get", 00:06:24.449 "accel_crypto_key_create", 00:06:24.449 "accel_assign_opc", 00:06:24.449 "accel_get_module_info", 00:06:24.449 "accel_get_opc_assignments", 00:06:24.449 "vmd_rescan", 00:06:24.449 "vmd_remove_device", 00:06:24.449 "vmd_enable", 00:06:24.449 "sock_get_default_impl", 00:06:24.449 "sock_set_default_impl", 00:06:24.449 "sock_impl_set_options", 00:06:24.449 "sock_impl_get_options", 00:06:24.449 "iobuf_get_stats", 00:06:24.449 "iobuf_set_options", 00:06:24.449 "keyring_get_keys", 00:06:24.449 "framework_get_pci_devices", 00:06:24.449 "framework_get_config", 00:06:24.449 "framework_get_subsystems", 00:06:24.449 "fsdev_set_opts", 00:06:24.449 "fsdev_get_opts", 00:06:24.449 "trace_get_info", 00:06:24.449 "trace_get_tpoint_group_mask", 00:06:24.449 "trace_disable_tpoint_group", 00:06:24.449 "trace_enable_tpoint_group", 00:06:24.449 "trace_clear_tpoint_mask", 00:06:24.449 "trace_set_tpoint_mask", 00:06:24.449 "notify_get_notifications", 00:06:24.449 "notify_get_types", 00:06:24.449 "spdk_get_version", 00:06:24.449 "rpc_get_methods" 00:06:24.449 ] 00:06:24.449 19:24:51 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:24.449 19:24:51 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:24.449 19:24:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.449 19:24:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:24.449 19:24:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58190 00:06:24.449 19:24:51 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58190 ']' 00:06:24.449 19:24:51 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58190 00:06:24.449 19:24:51 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:24.449 19:24:51 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.449 19:24:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58190 00:06:24.711 killing process with pid 58190 00:06:24.711 19:24:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.711 19:24:51 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.711 19:24:51 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58190' 00:06:24.711 19:24:51 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58190 00:06:24.711 19:24:51 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58190 00:06:26.096 ************************************ 00:06:26.096 END TEST spdkcli_tcp 00:06:26.096 ************************************ 00:06:26.096 00:06:26.096 real 0m2.913s 00:06:26.096 user 0m5.232s 00:06:26.096 sys 0m0.424s 00:06:26.096 19:24:53 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.096 19:24:53 spdkcli_tcp -- common/autotest_common.sh@10 
-- # set +x 00:06:26.096 19:24:53 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:26.096 19:24:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.096 19:24:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.096 19:24:53 -- common/autotest_common.sh@10 -- # set +x 00:06:26.096 ************************************ 00:06:26.096 START TEST dpdk_mem_utility 00:06:26.096 ************************************ 00:06:26.096 19:24:53 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:26.358 * Looking for test storage... 00:06:26.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:26.358 19:24:53 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.358 19:24:53 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.358 19:24:53 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.358 19:24:53 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:26.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
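A note on the spdkcli_tcp run that ended just above, before the dpdk_mem_utility trace continues: spdk_tgt only listens on a UNIX-domain socket, so tcp.sh (at @30-@33 in the trace) used socat to expose it on 127.0.0.1:9998 and drove it with rpc.py over TCP; the long JSON array earlier in the log is that rpc_get_methods reply. Replayed from the trace, with the cleanup kill added here for completeness:

    # Bridge the target's UNIX RPC socket to a local TCP port (tcp.sh@30).
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # Query over TCP with 100 retries and a 2 s timeout (tcp.sh@33).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"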
00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.358 19:24:53 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:26.358 19:24:53 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.358 19:24:53 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:26.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.358 --rc genhtml_branch_coverage=1 00:06:26.358 --rc genhtml_function_coverage=1 00:06:26.358 --rc genhtml_legend=1 00:06:26.358 --rc geninfo_all_blocks=1 00:06:26.358 --rc geninfo_unexecuted_blocks=1 00:06:26.358 00:06:26.358 ' 00:06:26.358 19:24:53 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:26.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.358 --rc genhtml_branch_coverage=1 00:06:26.358 --rc genhtml_function_coverage=1 00:06:26.358 --rc genhtml_legend=1 00:06:26.358 --rc geninfo_all_blocks=1 00:06:26.358 --rc geninfo_unexecuted_blocks=1 00:06:26.358 00:06:26.358 ' 00:06:26.358 19:24:53 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:26.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.358 --rc genhtml_branch_coverage=1 00:06:26.358 --rc genhtml_function_coverage=1 00:06:26.358 --rc genhtml_legend=1 00:06:26.358 --rc geninfo_all_blocks=1 00:06:26.358 --rc geninfo_unexecuted_blocks=1 00:06:26.358 00:06:26.358 ' 00:06:26.358 19:24:53 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:26.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.358 --rc genhtml_branch_coverage=1 00:06:26.358 --rc genhtml_function_coverage=1 00:06:26.358 --rc genhtml_legend=1 00:06:26.358 --rc geninfo_all_blocks=1 00:06:26.358 --rc geninfo_unexecuted_blocks=1 00:06:26.358 00:06:26.358 ' 00:06:26.358 19:24:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:26.358 19:24:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58301 00:06:26.358 19:24:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58301 00:06:26.358 19:24:53 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58301 ']' 00:06:26.358 19:24:53 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.358 19:24:53 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.358 19:24:53 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.358 19:24:53 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.358 19:24:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:26.358 19:24:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.358 [2024-12-05 19:24:53.544040] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
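Once the target is up, the rest of this test reduces to the two script invocations that the following trace walks through: the env_dpdk_get_mem_stats RPC makes the running target dump its DPDK memory state to a file, and dpdk_mem_info.py renders that dump. In shell form, using the paths from the trace:

    # Ask spdk_tgt for a DPDK memory snapshot; the RPC replies with the dump
    # path ({"filename": "/tmp/spdk_mem_dump.txt"} below).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # Summarize heaps, mempools and memzones from the dump...
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    # ...then list the individual elements of heap id 0, as dumped below.
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0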
00:06:26.358 [2024-12-05 19:24:53.544879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58301 ] 00:06:26.619 [2024-12-05 19:24:53.704253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.619 [2024-12-05 19:24:53.836833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.564 19:24:54 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.564 19:24:54 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:27.564 19:24:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:27.564 19:24:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:27.564 19:24:54 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.564 19:24:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:27.564 { 00:06:27.564 "filename": "/tmp/spdk_mem_dump.txt" 00:06:27.564 } 00:06:27.564 19:24:54 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.564 19:24:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:27.564 DPDK memory size 824.000000 MiB in 1 heap(s) 00:06:27.564 1 heaps totaling size 824.000000 MiB 00:06:27.564 size: 824.000000 MiB heap id: 0 00:06:27.564 end heaps---------- 00:06:27.564 9 mempools totaling size 603.782043 MiB 00:06:27.564 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:27.564 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:27.564 size: 100.555481 MiB name: bdev_io_58301 00:06:27.564 size: 50.003479 MiB name: msgpool_58301 00:06:27.564 size: 36.509338 MiB name: fsdev_io_58301 00:06:27.564 size: 21.763794 MiB name: PDU_Pool 00:06:27.564 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:27.564 size: 4.133484 MiB name: evtpool_58301 00:06:27.564 size: 0.026123 MiB name: Session_Pool 00:06:27.564 end mempools------- 00:06:27.564 6 memzones totaling size 4.142822 MiB 00:06:27.564 size: 1.000366 MiB name: RG_ring_0_58301 00:06:27.564 size: 1.000366 MiB name: RG_ring_1_58301 00:06:27.564 size: 1.000366 MiB name: RG_ring_4_58301 00:06:27.564 size: 1.000366 MiB name: RG_ring_5_58301 00:06:27.564 size: 0.125366 MiB name: RG_ring_2_58301 00:06:27.564 size: 0.015991 MiB name: RG_ring_3_58301 00:06:27.564 end memzones------- 00:06:27.564 19:24:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:27.564 heap id: 0 total size: 824.000000 MiB number of busy elements: 322 number of free elements: 18 00:06:27.564 list of free elements. 
size: 16.779663 MiB 00:06:27.564 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:27.564 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:27.564 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:27.564 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:27.564 element at address: 0x200019900040 with size: 0.999939 MiB 00:06:27.564 element at address: 0x200019a00000 with size: 0.999084 MiB 00:06:27.564 element at address: 0x200032600000 with size: 0.994324 MiB 00:06:27.564 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:27.564 element at address: 0x200019200000 with size: 0.959656 MiB 00:06:27.564 element at address: 0x200019d00040 with size: 0.936401 MiB 00:06:27.564 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:27.564 element at address: 0x20001b400000 with size: 0.559753 MiB 00:06:27.564 element at address: 0x200000c00000 with size: 0.489197 MiB 00:06:27.564 element at address: 0x200019600000 with size: 0.487976 MiB 00:06:27.564 element at address: 0x200019e00000 with size: 0.485413 MiB 00:06:27.564 element at address: 0x200012c00000 with size: 0.433228 MiB 00:06:27.564 element at address: 0x200028800000 with size: 0.391907 MiB 00:06:27.564 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:27.564 list of standard malloc elements. size: 199.289429 MiB 00:06:27.564 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:06:27.564 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:06:27.564 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:27.564 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:27.564 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:06:27.564 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:27.564 element at address: 0x200019deff40 with size: 0.062683 MiB 00:06:27.564 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:27.564 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:06:27.564 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:06:27.564 element at address: 0x200012bff040 with size: 0.000305 MiB 00:06:27.564 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:06:27.564 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:06:27.564 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:06:27.564 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:06:27.564 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:06:27.564 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:06:27.564 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:06:27.564 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:06:27.565 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:06:27.565 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200000cff000 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012bff180 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012bff280 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012bff380 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012bff480 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012bff580 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012bff680 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012bff780 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012bff880 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012bff980 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200019affc40 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b48f4c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b48f5c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b4910c0 with size: 0.000244 MiB 
00:06:27.565 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:06:27.565 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:06:27.566 element at 
address: 0x20001b4942c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:06:27.566 element at address: 0x200028864540 with size: 0.000244 MiB 00:06:27.566 element at address: 0x200028864640 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886b300 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886b580 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886b680 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886b780 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886b880 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886b980 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886be80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886c080 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886c180 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886c280 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886c380 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886c480 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886c580 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886c680 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886c780 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886c880 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886c980 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886d080 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886d180 
with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886d280 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886d380 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886d480 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886d580 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886d680 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886d780 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886d880 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886d980 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886da80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886db80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886de80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886df80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886e080 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886e180 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886e280 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886e380 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886e480 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886e580 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886e680 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886e780 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886e880 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886e980 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886f080 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886f180 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886f280 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886f380 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886f480 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886f580 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886f680 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886f780 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886f880 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886f980 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:06:27.566 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:06:27.566 list of memzone associated elements. 
size: 607.930908 MiB 00:06:27.566 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:06:27.566 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:27.566 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:06:27.566 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:27.566 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:06:27.567 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58301_0 00:06:27.567 element at address: 0x200000dff340 with size: 48.003113 MiB 00:06:27.567 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58301_0 00:06:27.567 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:06:27.567 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58301_0 00:06:27.567 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:06:27.567 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:27.567 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:06:27.567 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:27.567 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:06:27.567 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58301_0 00:06:27.567 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:06:27.567 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58301 00:06:27.567 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:27.567 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58301 00:06:27.567 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:06:27.567 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:27.567 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:06:27.567 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:27.567 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:27.567 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:27.567 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:06:27.567 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:27.567 element at address: 0x200000cff100 with size: 1.000549 MiB 00:06:27.567 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58301 00:06:27.567 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:06:27.567 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58301 00:06:27.567 element at address: 0x200019affd40 with size: 1.000549 MiB 00:06:27.567 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58301 00:06:27.567 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:06:27.567 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58301 00:06:27.567 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:06:27.567 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58301 00:06:27.567 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:06:27.567 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58301 00:06:27.567 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:06:27.567 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:27.567 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:06:27.567 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:27.567 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:06:27.567 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:27.567 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:06:27.567 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58301 00:06:27.567 element at address: 0x20000085df80 with size: 0.125549 MiB 00:06:27.567 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58301 00:06:27.567 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:06:27.567 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:27.567 element at address: 0x200028864740 with size: 0.023804 MiB 00:06:27.567 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:27.567 element at address: 0x200000859d40 with size: 0.016174 MiB 00:06:27.567 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58301 00:06:27.567 element at address: 0x20002886a8c0 with size: 0.002502 MiB 00:06:27.567 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:27.567 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:06:27.567 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58301 00:06:27.567 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:06:27.567 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58301 00:06:27.567 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:06:27.567 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58301 00:06:27.567 element at address: 0x20002886b400 with size: 0.000366 MiB 00:06:27.567 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:27.567 19:24:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:27.567 19:24:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58301 00:06:27.567 19:24:54 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58301 ']' 00:06:27.567 19:24:54 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58301 00:06:27.567 19:24:54 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:27.567 19:24:54 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.567 19:24:54 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58301 00:06:27.567 19:24:54 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.567 19:24:54 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.567 19:24:54 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58301' 00:06:27.567 killing process with pid 58301 00:06:27.567 19:24:54 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58301 00:06:27.567 19:24:54 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58301 00:06:29.501 ************************************ 00:06:29.501 END TEST dpdk_mem_utility 00:06:29.501 ************************************ 00:06:29.501 00:06:29.501 real 0m2.897s 00:06:29.501 user 0m2.910s 00:06:29.501 sys 0m0.415s 00:06:29.501 19:24:56 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.501 19:24:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:29.501 19:24:56 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:29.501 19:24:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.501 19:24:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.501 19:24:56 -- common/autotest_common.sh@10 -- # set +x 
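The memory report above is produced in two steps, both visible in the xtrace: the test first asks the running SPDK target over RPC to dump its DPDK memory state to a file, then post-processes that file with a helper script. A minimal sketch of the same flow, assuming an SPDK repo checkout and an already-running target (the commands and paths are the ones printed in the log):

  scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt and returns the filename
  scripts/dpdk_mem_info.py                # summarize the dump: heaps, mempools, memzones
  scripts/dpdk_mem_info.py -m 0           # per-element detail for heap 0, i.e. the long "element at address" listing above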
00:06:29.501 ************************************ 00:06:29.501 START TEST event 00:06:29.501 ************************************ 00:06:29.501 19:24:56 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:29.501 * Looking for test storage... 00:06:29.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:29.501 19:24:56 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:29.501 19:24:56 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:29.501 19:24:56 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:29.501 19:24:56 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:29.502 19:24:56 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.502 19:24:56 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.502 19:24:56 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.502 19:24:56 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.502 19:24:56 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.502 19:24:56 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.502 19:24:56 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.502 19:24:56 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.502 19:24:56 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.502 19:24:56 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.502 19:24:56 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.502 19:24:56 event -- scripts/common.sh@344 -- # case "$op" in 00:06:29.502 19:24:56 event -- scripts/common.sh@345 -- # : 1 00:06:29.502 19:24:56 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.502 19:24:56 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.502 19:24:56 event -- scripts/common.sh@365 -- # decimal 1 00:06:29.502 19:24:56 event -- scripts/common.sh@353 -- # local d=1 00:06:29.502 19:24:56 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.502 19:24:56 event -- scripts/common.sh@355 -- # echo 1 00:06:29.502 19:24:56 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.502 19:24:56 event -- scripts/common.sh@366 -- # decimal 2 00:06:29.502 19:24:56 event -- scripts/common.sh@353 -- # local d=2 00:06:29.502 19:24:56 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.502 19:24:56 event -- scripts/common.sh@355 -- # echo 2 00:06:29.502 19:24:56 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.502 19:24:56 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.502 19:24:56 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.502 19:24:56 event -- scripts/common.sh@368 -- # return 0 00:06:29.502 19:24:56 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.502 19:24:56 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:29.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.502 --rc genhtml_branch_coverage=1 00:06:29.502 --rc genhtml_function_coverage=1 00:06:29.502 --rc genhtml_legend=1 00:06:29.502 --rc geninfo_all_blocks=1 00:06:29.502 --rc geninfo_unexecuted_blocks=1 00:06:29.502 00:06:29.502 ' 00:06:29.502 19:24:56 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:29.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.502 --rc genhtml_branch_coverage=1 00:06:29.502 --rc genhtml_function_coverage=1 00:06:29.502 --rc genhtml_legend=1 00:06:29.502 --rc 
geninfo_all_blocks=1 00:06:29.502 --rc geninfo_unexecuted_blocks=1 00:06:29.502 00:06:29.502 ' 00:06:29.502 19:24:56 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:29.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.502 --rc genhtml_branch_coverage=1 00:06:29.502 --rc genhtml_function_coverage=1 00:06:29.502 --rc genhtml_legend=1 00:06:29.502 --rc geninfo_all_blocks=1 00:06:29.502 --rc geninfo_unexecuted_blocks=1 00:06:29.502 00:06:29.502 ' 00:06:29.502 19:24:56 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:29.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.502 --rc genhtml_branch_coverage=1 00:06:29.502 --rc genhtml_function_coverage=1 00:06:29.502 --rc genhtml_legend=1 00:06:29.502 --rc geninfo_all_blocks=1 00:06:29.502 --rc geninfo_unexecuted_blocks=1 00:06:29.502 00:06:29.502 ' 00:06:29.502 19:24:56 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:29.502 19:24:56 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:29.502 19:24:56 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:29.502 19:24:56 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:29.502 19:24:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.502 19:24:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.502 ************************************ 00:06:29.502 START TEST event_perf 00:06:29.502 ************************************ 00:06:29.502 19:24:56 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:29.502 Running I/O for 1 seconds...[2024-12-05 19:24:56.457477] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:06:29.502 [2024-12-05 19:24:56.457697] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58398 ] 00:06:29.502 [2024-12-05 19:24:56.618631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.502 [2024-12-05 19:24:56.726470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.502 [2024-12-05 19:24:56.726790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.502 [2024-12-05 19:24:56.727410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.502 Running I/O for 1 seconds...[2024-12-05 19:24:56.727639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.891 00:06:30.891 lcore 0: 197019 00:06:30.891 lcore 1: 197019 00:06:30.891 lcore 2: 197019 00:06:30.891 lcore 3: 197018 00:06:30.891 done. 
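The per-core counters printed at the end ("lcore 0: 197019" and so on) appear to be event_perf's tally of events processed on each reactor during the one-second run; near-identical counts across the four cores suggest the load was spread evenly. To reproduce just this step outside the test harness, a sketch using the exact arguments the xtrace shows:

  test/event/event_perf/event_perf -m 0xF -t 1   # core mask 0xF = 4 reactors, -t 1 = run for 1 second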
00:06:30.891 00:06:30.891 real 0m1.471s 00:06:30.891 ************************************ 00:06:30.891 user 0m4.257s 00:06:30.891 sys 0m0.088s 00:06:30.891 19:24:57 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.891 19:24:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:30.891 END TEST event_perf 00:06:30.891 ************************************ 00:06:30.892 19:24:57 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:30.892 19:24:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:30.892 19:24:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.892 19:24:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.892 ************************************ 00:06:30.892 START TEST event_reactor 00:06:30.892 ************************************ 00:06:30.892 19:24:57 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:30.892 [2024-12-05 19:24:57.997868] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:06:30.892 [2024-12-05 19:24:57.998171] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58436 ] 00:06:31.153 [2024-12-05 19:24:58.159960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.153 [2024-12-05 19:24:58.262631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.542 test_start 00:06:32.542 oneshot 00:06:32.542 tick 100 00:06:32.542 tick 100 00:06:32.542 tick 250 00:06:32.542 tick 100 00:06:32.542 tick 100 00:06:32.542 tick 100 00:06:32.542 tick 250 00:06:32.542 tick 500 00:06:32.542 tick 100 00:06:32.542 tick 100 00:06:32.542 tick 250 00:06:32.542 tick 100 00:06:32.542 tick 100 00:06:32.542 test_end 00:06:32.542 ************************************ 00:06:32.542 END TEST event_reactor 00:06:32.542 ************************************ 00:06:32.542 00:06:32.542 real 0m1.448s 00:06:32.542 user 0m1.279s 00:06:32.542 sys 0m0.059s 00:06:32.542 19:24:59 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.542 19:24:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:32.542 19:24:59 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:32.542 19:24:59 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:32.542 19:24:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.542 19:24:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.542 ************************************ 00:06:32.542 START TEST event_reactor_perf 00:06:32.542 ************************************ 00:06:32.542 19:24:59 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:32.542 [2024-12-05 19:24:59.516490] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:06:32.542 [2024-12-05 19:24:59.516632] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58474 ] 00:06:32.542 [2024-12-05 19:24:59.677762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.542 [2024-12-05 19:24:59.780564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.926 test_start 00:06:33.926 test_end 00:06:33.926 Performance: 314355 events per second 00:06:33.926 ************************************ 00:06:33.926 END TEST event_reactor_perf 00:06:33.926 ************************************ 00:06:33.926 00:06:33.926 real 0m1.451s 00:06:33.926 user 0m1.275s 00:06:33.926 sys 0m0.067s 00:06:33.926 19:25:00 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.926 19:25:00 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.926 19:25:00 event -- event/event.sh@49 -- # uname -s 00:06:33.926 19:25:00 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:33.926 19:25:00 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:33.926 19:25:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.926 19:25:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.926 19:25:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.926 ************************************ 00:06:33.926 START TEST event_scheduler 00:06:33.926 ************************************ 00:06:33.926 19:25:01 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:33.926 * Looking for test storage... 
00:06:33.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:33.926 19:25:01 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:33.926 19:25:01 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:33.926 19:25:01 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:33.926 19:25:01 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:33.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.926 19:25:01 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:33.927 19:25:01 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.927 19:25:01 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:33.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.927 --rc genhtml_branch_coverage=1 00:06:33.927 --rc genhtml_function_coverage=1 00:06:33.927 --rc genhtml_legend=1 00:06:33.927 --rc geninfo_all_blocks=1 00:06:33.927 --rc geninfo_unexecuted_blocks=1 00:06:33.927 00:06:33.927 ' 00:06:33.927 19:25:01 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:33.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.927 --rc genhtml_branch_coverage=1 00:06:33.927 --rc genhtml_function_coverage=1 00:06:33.927 --rc genhtml_legend=1 00:06:33.927 --rc geninfo_all_blocks=1 00:06:33.927 --rc geninfo_unexecuted_blocks=1 00:06:33.927 00:06:33.927 ' 00:06:33.927 19:25:01 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:33.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.927 --rc genhtml_branch_coverage=1 00:06:33.927 --rc genhtml_function_coverage=1 00:06:33.927 --rc genhtml_legend=1 00:06:33.927 --rc geninfo_all_blocks=1 00:06:33.927 --rc geninfo_unexecuted_blocks=1 00:06:33.927 00:06:33.927 ' 00:06:33.927 19:25:01 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:33.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.927 --rc genhtml_branch_coverage=1 00:06:33.927 --rc genhtml_function_coverage=1 00:06:33.927 --rc genhtml_legend=1 00:06:33.927 --rc geninfo_all_blocks=1 00:06:33.927 --rc geninfo_unexecuted_blocks=1 00:06:33.927 00:06:33.927 ' 00:06:33.927 19:25:01 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:33.927 19:25:01 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58544 00:06:33.927 19:25:01 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:33.927 19:25:01 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58544 00:06:33.927 19:25:01 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58544 ']' 00:06:33.927 19:25:01 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.927 19:25:01 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.927 19:25:01 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.927 19:25:01 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.927 19:25:01 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:33.927 19:25:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.189 [2024-12-05 19:25:01.236454] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:06:34.189 [2024-12-05 19:25:01.236570] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58544 ] 00:06:34.189 [2024-12-05 19:25:01.395115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.449 [2024-12-05 19:25:01.504179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.449 [2024-12-05 19:25:01.504578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.449 [2024-12-05 19:25:01.504744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.449 [2024-12-05 19:25:01.504750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.022 19:25:02 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.022 19:25:02 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:35.022 19:25:02 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:35.022 19:25:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.022 19:25:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:35.022 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:35.022 POWER: Cannot set governor of lcore 0 to userspace 00:06:35.022 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:35.022 POWER: Cannot set governor of lcore 0 to performance 00:06:35.022 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:35.022 POWER: Cannot set governor of lcore 0 to userspace 00:06:35.022 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:35.022 POWER: Cannot set governor of lcore 0 to userspace 00:06:35.022 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:35.022 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:35.022 POWER: Unable to set Power Management Environment for lcore 0 00:06:35.022 [2024-12-05 19:25:02.098045] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:35.022 [2024-12-05 19:25:02.098068] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:35.022 [2024-12-05 19:25:02.098078] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:35.022 [2024-12-05 19:25:02.098097] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:35.022 [2024-12-05 19:25:02.098105] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:35.022 [2024-12-05 19:25:02.098114] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:35.022 19:25:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.022 19:25:02 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:35.022 19:25:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.022 19:25:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:35.296 [2024-12-05 19:25:02.329221] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
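The POWER and GUEST_CHANNEL errors above are benign in this environment: the dynamic scheduler tries to take control of the cpufreq governors, cannot (no power-agent channel inside the VM), and falls back with "Unable to initialize dpdk governor" while still applying its load/core/busy limits. The setup itself is two RPC calls; because the scheduler app was launched with --wait-for-rpc, the scheduler is selected before initialization completes. In sketch form (rpc.py from the repo root, default RPC socket):

  scripts/rpc.py framework_set_scheduler dynamic   # choose the dynamic scheduler while init is paused
  scripts/rpc.py framework_start_init              # then finish SPDK subsystem initialization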
00:06:35.296 19:25:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.296 19:25:02 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:35.296 19:25:02 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.296 19:25:02 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.296 19:25:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:35.296 ************************************ 00:06:35.296 START TEST scheduler_create_thread 00:06:35.296 ************************************ 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.296 2 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.296 3 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.296 4 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.296 5 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.296 6 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.296 7 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.296 8 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.296 9 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.296 10 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.296 19:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.683 19:25:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.683 19:25:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:36.683 19:25:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:36.683 19:25:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.683 19:25:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.085 ************************************ 00:06:38.085 END TEST scheduler_create_thread 00:06:38.085 ************************************ 00:06:38.085 19:25:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.085 00:06:38.085 real 0m2.618s 00:06:38.085 user 0m0.017s 00:06:38.085 sys 0m0.008s 00:06:38.085 19:25:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.085 19:25:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.085 19:25:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:38.085 19:25:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58544 00:06:38.085 19:25:05 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58544 ']' 00:06:38.085 19:25:05 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58544 00:06:38.085 19:25:05 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:38.085 19:25:05 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.085 19:25:05 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58544 00:06:38.085 killing process with pid 58544 00:06:38.085 19:25:05 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:38.085 19:25:05 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:38.085 19:25:05 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58544' 00:06:38.085 19:25:05 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58544 00:06:38.085 19:25:05 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58544 00:06:38.344 [2024-12-05 19:25:05.444617] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
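The scheduler_create_thread test that just finished drives the scheduler app entirely through its plugin RPCs: it creates pinned and unpinned threads with different core masks and active loads, adjusts one thread's load, and deletes another. The calls, in sketch form as they appear in the xtrace (rpc_cmd is the harness wrapper around rpc.py):

  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # thread pinned to core 0, 100% active
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30        # unpinned thread, 30% active
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # raise thread 11 ("half_active") from 0 to 50%
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12                               # remove thread 12 ("deleted")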
00:06:39.284 00:06:39.284 real 0m5.188s 00:06:39.285 user 0m9.124s 00:06:39.285 sys 0m0.351s 00:06:39.285 19:25:06 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.285 19:25:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:39.285 ************************************ 00:06:39.285 END TEST event_scheduler 00:06:39.285 ************************************ 00:06:39.285 19:25:06 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:39.285 19:25:06 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:39.285 19:25:06 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.285 19:25:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.285 19:25:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.285 ************************************ 00:06:39.285 START TEST app_repeat 00:06:39.285 ************************************ 00:06:39.285 19:25:06 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:39.285 19:25:06 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.285 19:25:06 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.285 19:25:06 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:39.285 19:25:06 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.285 19:25:06 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:39.285 19:25:06 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:39.285 19:25:06 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:39.285 Process app_repeat pid: 58645 00:06:39.285 spdk_app_start Round 0 00:06:39.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:39.285 19:25:06 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58645 00:06:39.285 19:25:06 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.285 19:25:06 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58645' 00:06:39.285 19:25:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:39.285 19:25:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:39.285 19:25:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58645 /var/tmp/spdk-nbd.sock 00:06:39.285 19:25:06 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:39.285 19:25:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58645 ']' 00:06:39.285 19:25:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:39.285 19:25:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.285 19:25:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:39.285 19:25:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.285 19:25:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:39.285 [2024-12-05 19:25:06.332641] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:06:39.285 [2024-12-05 19:25:06.332770] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58645 ] 00:06:39.285 [2024-12-05 19:25:06.493580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.546 [2024-12-05 19:25:06.604404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.546 [2024-12-05 19:25:06.604594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.133 19:25:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.133 19:25:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:40.133 19:25:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.422 Malloc0 00:06:40.422 19:25:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.683 Malloc1 00:06:40.683 19:25:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.683 19:25:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.683 19:25:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.683 19:25:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:40.683 19:25:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.683 19:25:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:40.683 19:25:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.683 19:25:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.683 19:25:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.683 19:25:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:40.683 19:25:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.683 19:25:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:40.683 19:25:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:40.683 19:25:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:40.683 19:25:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.683 19:25:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:40.944 /dev/nbd0 00:06:40.944 19:25:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:40.944 19:25:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:40.944 19:25:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:40.944 19:25:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:40.944 19:25:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:40.945 19:25:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:40.945 19:25:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:40.945 19:25:07 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:40.945 19:25:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:40.945 19:25:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:40.945 19:25:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.945 1+0 records in 00:06:40.945 1+0 records out 00:06:40.945 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000682464 s, 6.0 MB/s 00:06:40.945 19:25:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:40.945 19:25:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:40.945 19:25:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:40.945 19:25:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:40.945 19:25:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:40.945 19:25:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.945 19:25:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.945 19:25:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.204 /dev/nbd1 00:06:41.204 19:25:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.204 19:25:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.204 19:25:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:41.204 19:25:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:41.204 19:25:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.204 19:25:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.204 19:25:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:41.204 19:25:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:41.204 19:25:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.204 19:25:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.204 19:25:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.204 1+0 records in 00:06:41.204 1+0 records out 00:06:41.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336475 s, 12.2 MB/s 00:06:41.204 19:25:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.204 19:25:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:41.204 19:25:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.204 19:25:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.204 19:25:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:41.204 19:25:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.204 19:25:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.204 19:25:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.204 19:25:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.204 
19:25:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:41.464 { 00:06:41.464 "nbd_device": "/dev/nbd0", 00:06:41.464 "bdev_name": "Malloc0" 00:06:41.464 }, 00:06:41.464 { 00:06:41.464 "nbd_device": "/dev/nbd1", 00:06:41.464 "bdev_name": "Malloc1" 00:06:41.464 } 00:06:41.464 ]' 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:41.464 { 00:06:41.464 "nbd_device": "/dev/nbd0", 00:06:41.464 "bdev_name": "Malloc0" 00:06:41.464 }, 00:06:41.464 { 00:06:41.464 "nbd_device": "/dev/nbd1", 00:06:41.464 "bdev_name": "Malloc1" 00:06:41.464 } 00:06:41.464 ]' 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:41.464 /dev/nbd1' 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:41.464 /dev/nbd1' 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:41.464 256+0 records in 00:06:41.464 256+0 records out 00:06:41.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0076396 s, 137 MB/s 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:41.464 256+0 records in 00:06:41.464 256+0 records out 00:06:41.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258406 s, 40.6 MB/s 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:41.464 256+0 records in 00:06:41.464 256+0 records out 00:06:41.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213655 s, 49.1 MB/s 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.464 19:25:08 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.464 19:25:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:41.465 19:25:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.465 19:25:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.465 19:25:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:41.465 19:25:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:41.465 19:25:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.465 19:25:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:41.724 19:25:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:41.724 19:25:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:41.724 19:25:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:41.724 19:25:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.724 19:25:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.724 19:25:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:41.724 19:25:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.724 19:25:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.724 19:25:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.724 19:25:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:41.984 19:25:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:41.984 19:25:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:41.984 19:25:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:41.984 19:25:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.984 19:25:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.984 19:25:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:41.984 19:25:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.984 19:25:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.984 19:25:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.984 19:25:09 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.984 19:25:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.244 19:25:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:42.244 19:25:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.244 19:25:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:42.244 19:25:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:42.244 19:25:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:42.244 19:25:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.244 19:25:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:42.244 19:25:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:42.244 19:25:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:42.244 19:25:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:42.244 19:25:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:42.244 19:25:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:42.244 19:25:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:42.503 19:25:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:43.460 [2024-12-05 19:25:10.393651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.460 [2024-12-05 19:25:10.496153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.460 [2024-12-05 19:25:10.496299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.460 [2024-12-05 19:25:10.628022] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:43.460 [2024-12-05 19:25:10.628116] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:46.021 spdk_app_start Round 1 00:06:46.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:46.021 19:25:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:46.021 19:25:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:46.021 19:25:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58645 /var/tmp/spdk-nbd.sock 00:06:46.021 19:25:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58645 ']' 00:06:46.021 19:25:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.021 19:25:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.021 19:25:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
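
Round 0 above has completed one full nbd_rpc_data_verify pass: export two malloc bdevs over NBD, write random data through the block devices, then read it back and compare. The same sequence as a standalone sketch (the scratch-file path is illustrative; the commands otherwise mirror the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  # Export two 64 MiB malloc bdevs with 4 KiB blocks over NBD.
  "$rpc" -s "$sock" bdev_malloc_create 64 4096    # -> prints Malloc0
  "$rpc" -s "$sock" bdev_malloc_create 64 4096    # -> prints Malloc1
  "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
  "$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

  # Write 1 MiB of random data through each device, then read-compare it.
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
      cmp -b -n 1M /tmp/nbdrandtest "$nbd"        # non-zero exit on mismatch
  done

  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
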
00:06:46.021 19:25:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.021 19:25:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.021 19:25:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.021 19:25:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:46.021 19:25:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.021 Malloc0 00:06:46.021 19:25:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.317 Malloc1 00:06:46.317 19:25:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.317 19:25:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.317 19:25:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.317 19:25:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:46.317 19:25:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.317 19:25:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:46.317 19:25:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.317 19:25:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.317 19:25:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.317 19:25:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.317 19:25:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.317 19:25:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.317 19:25:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:46.317 19:25:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.317 19:25:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.317 19:25:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:46.579 /dev/nbd0 00:06:46.579 19:25:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.579 19:25:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.579 19:25:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:46.579 19:25:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:46.579 19:25:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.579 19:25:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.579 19:25:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:46.579 19:25:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:46.579 19:25:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.579 19:25:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.579 19:25:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.579 1+0 records in 00:06:46.579 1+0 records out 
00:06:46.579 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349788 s, 11.7 MB/s 00:06:46.579 19:25:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.579 19:25:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:46.579 19:25:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.579 19:25:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.579 19:25:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:46.579 19:25:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.579 19:25:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.579 19:25:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:46.842 /dev/nbd1 00:06:46.842 19:25:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:46.842 19:25:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:46.842 19:25:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:46.842 19:25:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:46.842 19:25:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.842 19:25:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.842 19:25:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:46.842 19:25:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:46.842 19:25:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.842 19:25:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.842 19:25:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.842 1+0 records in 00:06:46.842 1+0 records out 00:06:46.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200441 s, 20.4 MB/s 00:06:46.842 19:25:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.842 19:25:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:46.842 19:25:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.842 19:25:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.842 19:25:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:46.842 19:25:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.842 19:25:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.842 19:25:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.842 19:25:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.842 19:25:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:47.103 { 00:06:47.103 "nbd_device": "/dev/nbd0", 00:06:47.103 "bdev_name": "Malloc0" 00:06:47.103 }, 00:06:47.103 { 00:06:47.103 "nbd_device": "/dev/nbd1", 00:06:47.103 "bdev_name": "Malloc1" 00:06:47.103 } 
00:06:47.103 ]' 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:47.103 { 00:06:47.103 "nbd_device": "/dev/nbd0", 00:06:47.103 "bdev_name": "Malloc0" 00:06:47.103 }, 00:06:47.103 { 00:06:47.103 "nbd_device": "/dev/nbd1", 00:06:47.103 "bdev_name": "Malloc1" 00:06:47.103 } 00:06:47.103 ]' 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:47.103 /dev/nbd1' 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:47.103 /dev/nbd1' 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:47.103 256+0 records in 00:06:47.103 256+0 records out 00:06:47.103 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00426834 s, 246 MB/s 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:47.103 256+0 records in 00:06:47.103 256+0 records out 00:06:47.103 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.109582 s, 9.6 MB/s 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:47.103 256+0 records in 00:06:47.103 256+0 records out 00:06:47.103 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237271 s, 44.2 MB/s 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.103 19:25:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.363 19:25:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.363 19:25:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.363 19:25:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.363 19:25:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.363 19:25:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.363 19:25:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.363 19:25:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.363 19:25:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.364 19:25:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.364 19:25:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:47.626 19:25:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:47.626 19:25:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:47.626 19:25:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:47.626 19:25:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.626 19:25:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.626 19:25:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:47.626 19:25:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.626 19:25:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.626 19:25:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.626 19:25:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.626 19:25:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.888 19:25:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.888 19:25:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:47.888 19:25:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:47.888 19:25:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.888 19:25:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.888 19:25:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.888 19:25:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:47.888 19:25:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.888 19:25:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.888 19:25:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:47.889 19:25:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:47.889 19:25:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:47.889 19:25:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:48.149 19:25:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:49.093 [2024-12-05 19:25:16.078884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:49.093 [2024-12-05 19:25:16.179169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.093 [2024-12-05 19:25:16.179347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.093 [2024-12-05 19:25:16.308461] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:49.093 [2024-12-05 19:25:16.308541] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:51.634 spdk_app_start Round 2 00:06:51.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:51.634 19:25:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:51.634 19:25:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:51.634 19:25:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58645 /var/tmp/spdk-nbd.sock 00:06:51.634 19:25:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58645 ']' 00:06:51.634 19:25:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.634 19:25:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.634 19:25:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
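
The nbd_get_count sequence traced just above (nbd_get_disks, then jq, then grep -c) is how the helper checks how many devices are still exported: two while the round is running, zero after nbd_stop_disk. Wrapped as a function purely for illustration:

  count_nbd_disks() {
      local sock=$1
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_get_disks |
          jq -r '.[] | .nbd_device' |
          grep -c /dev/nbd || true   # grep -c prints 0 but exits 1 on no match,
                                     # hence the '# true' step visible in the trace
  }
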
00:06:51.634 19:25:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.634 19:25:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.634 19:25:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.634 19:25:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:51.634 19:25:18 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.634 Malloc0 00:06:51.634 19:25:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.894 Malloc1 00:06:51.894 19:25:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.894 19:25:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.894 19:25:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.894 19:25:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.894 19:25:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.894 19:25:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.894 19:25:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.894 19:25:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.894 19:25:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.894 19:25:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.894 19:25:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.894 19:25:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.894 19:25:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:51.894 19:25:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.894 19:25:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.894 19:25:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:52.204 /dev/nbd0 00:06:52.204 19:25:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:52.204 19:25:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:52.204 19:25:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:52.204 19:25:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:52.204 19:25:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.204 19:25:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.204 19:25:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:52.204 19:25:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:52.204 19:25:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.204 19:25:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.204 19:25:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.204 1+0 records in 00:06:52.204 1+0 records out 
00:06:52.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037187 s, 11.0 MB/s 00:06:52.204 19:25:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.204 19:25:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:52.204 19:25:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.204 19:25:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.204 19:25:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:52.204 19:25:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.204 19:25:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.204 19:25:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:52.464 /dev/nbd1 00:06:52.464 19:25:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:52.465 19:25:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:52.465 19:25:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:52.465 19:25:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:52.465 19:25:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.465 19:25:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.465 19:25:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:52.465 19:25:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:52.465 19:25:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.465 19:25:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.465 19:25:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.465 1+0 records in 00:06:52.465 1+0 records out 00:06:52.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315173 s, 13.0 MB/s 00:06:52.465 19:25:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.465 19:25:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:52.465 19:25:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.465 19:25:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.465 19:25:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:52.465 19:25:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.465 19:25:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.465 19:25:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.465 19:25:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.465 19:25:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.724 19:25:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:52.724 { 00:06:52.724 "nbd_device": "/dev/nbd0", 00:06:52.724 "bdev_name": "Malloc0" 00:06:52.724 }, 00:06:52.724 { 00:06:52.724 "nbd_device": "/dev/nbd1", 00:06:52.724 "bdev_name": "Malloc1" 00:06:52.724 } 
00:06:52.724 ]' 00:06:52.724 19:25:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.724 19:25:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.724 { 00:06:52.724 "nbd_device": "/dev/nbd0", 00:06:52.724 "bdev_name": "Malloc0" 00:06:52.724 }, 00:06:52.724 { 00:06:52.724 "nbd_device": "/dev/nbd1", 00:06:52.724 "bdev_name": "Malloc1" 00:06:52.724 } 00:06:52.724 ]' 00:06:52.724 19:25:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.724 /dev/nbd1' 00:06:52.724 19:25:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.724 /dev/nbd1' 00:06:52.724 19:25:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.724 19:25:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:52.724 19:25:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:52.724 19:25:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:52.724 19:25:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:52.724 19:25:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:52.724 19:25:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.724 19:25:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.724 19:25:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:52.724 19:25:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.724 19:25:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:52.724 19:25:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:52.724 256+0 records in 00:06:52.724 256+0 records out 00:06:52.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00650427 s, 161 MB/s 00:06:52.724 19:25:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.724 19:25:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.724 256+0 records in 00:06:52.724 256+0 records out 00:06:52.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199684 s, 52.5 MB/s 00:06:52.724 19:25:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.725 19:25:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.725 256+0 records in 00:06:52.725 256+0 records out 00:06:52.725 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022247 s, 47.1 MB/s 00:06:52.725 19:25:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:52.725 19:25:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.725 19:25:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.725 19:25:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:52.725 19:25:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.725 19:25:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.725 19:25:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.725 19:25:19 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:52.725 19:25:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.725 19:25:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.725 19:25:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.725 19:25:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.725 19:25:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.725 19:25:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.725 19:25:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.725 19:25:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.725 19:25:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:52.725 19:25:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.725 19:25:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.984 19:25:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.984 19:25:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.984 19:25:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.984 19:25:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.984 19:25:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.984 19:25:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.984 19:25:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.984 19:25:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.984 19:25:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.984 19:25:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.245 19:25:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.245 19:25:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.245 19:25:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.245 19:25:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.245 19:25:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.245 19:25:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.245 19:25:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.245 19:25:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.245 19:25:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.245 19:25:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.245 19:25:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.245 19:25:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:53.505 19:25:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:53.505 19:25:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:53.505 19:25:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:53.505 19:25:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:53.505 19:25:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.505 19:25:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:53.505 19:25:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:53.505 19:25:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:53.505 19:25:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:53.505 19:25:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:53.505 19:25:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:53.505 19:25:20 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:53.765 19:25:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:54.334 [2024-12-05 19:25:21.578860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.595 [2024-12-05 19:25:21.679200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.595 [2024-12-05 19:25:21.679368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.595 [2024-12-05 19:25:21.808184] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:54.595 [2024-12-05 19:25:21.808262] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:57.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:57.146 19:25:23 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58645 /var/tmp/spdk-nbd.sock 00:06:57.146 19:25:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58645 ']' 00:06:57.146 19:25:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:57.146 19:25:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.146 19:25:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
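
The waitfornbd and waitfornbd_exit helpers that bracket every round poll /proc/partitions until the kernel has registered, or released, the NBD device. A simplified sketch of just the polling part; the retry interval is an assumption, and the real helper additionally read-verifies the device with dd and stat once it appears:

  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1    # illustrative back-off; not taken from the trace
      done
      ((i <= 20))      # fail if the device never showed up
  }

  waitfornbd_exit() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions || break
          sleep 0.1
      done
      ((i <= 20))      # fail if the device never went away
  }
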
00:06:57.146 19:25:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.146 19:25:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.146 19:25:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.146 19:25:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:57.146 19:25:24 event.app_repeat -- event/event.sh@39 -- # killprocess 58645 00:06:57.146 19:25:24 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58645 ']' 00:06:57.146 19:25:24 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58645 00:06:57.146 19:25:24 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:57.146 19:25:24 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.146 19:25:24 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58645 00:06:57.146 killing process with pid 58645 00:06:57.146 19:25:24 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.146 19:25:24 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.146 19:25:24 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58645' 00:06:57.146 19:25:24 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58645 00:06:57.146 19:25:24 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58645 00:06:57.719 spdk_app_start is called in Round 0. 00:06:57.719 Shutdown signal received, stop current app iteration 00:06:57.719 Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 reinitialization... 00:06:57.719 spdk_app_start is called in Round 1. 00:06:57.719 Shutdown signal received, stop current app iteration 00:06:57.719 Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 reinitialization... 00:06:57.719 spdk_app_start is called in Round 2. 00:06:57.719 Shutdown signal received, stop current app iteration 00:06:57.719 Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 reinitialization... 00:06:57.719 spdk_app_start is called in Round 3. 00:06:57.719 Shutdown signal received, stop current app iteration 00:06:57.719 ************************************ 00:06:57.719 END TEST app_repeat 00:06:57.719 ************************************ 00:06:57.719 19:25:24 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:57.719 19:25:24 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:57.719 00:06:57.719 real 0m18.474s 00:06:57.719 user 0m40.321s 00:06:57.719 sys 0m2.231s 00:06:57.719 19:25:24 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.719 19:25:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.719 19:25:24 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:57.719 19:25:24 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:57.719 19:25:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.719 19:25:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.719 19:25:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.719 ************************************ 00:06:57.719 START TEST cpu_locks 00:06:57.719 ************************************ 00:06:57.719 19:25:24 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:57.719 * Looking for test storage... 
00:06:57.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:57.719 19:25:24 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:57.719 19:25:24 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:57.719 19:25:24 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:57.719 19:25:24 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:57.719 19:25:24 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.719 19:25:24 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.719 19:25:24 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.719 19:25:24 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.719 19:25:24 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.719 19:25:24 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.719 19:25:24 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.719 19:25:24 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.719 19:25:24 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.719 19:25:24 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.719 19:25:24 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.719 19:25:24 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:57.719 19:25:24 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:57.719 19:25:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.719 19:25:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.719 19:25:24 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:57.719 19:25:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:57.719 19:25:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.719 19:25:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:57.719 19:25:24 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.982 19:25:24 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:57.982 19:25:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:57.982 19:25:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.982 19:25:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:57.982 19:25:24 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.982 19:25:24 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.982 19:25:24 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.982 19:25:24 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:57.982 19:25:24 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.982 19:25:24 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:57.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.982 --rc genhtml_branch_coverage=1 00:06:57.982 --rc genhtml_function_coverage=1 00:06:57.982 --rc genhtml_legend=1 00:06:57.982 --rc geninfo_all_blocks=1 00:06:57.982 --rc geninfo_unexecuted_blocks=1 00:06:57.982 00:06:57.982 ' 00:06:57.982 19:25:24 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:57.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.982 --rc genhtml_branch_coverage=1 00:06:57.982 --rc genhtml_function_coverage=1 
00:06:57.982 --rc genhtml_legend=1 00:06:57.982 --rc geninfo_all_blocks=1 00:06:57.982 --rc geninfo_unexecuted_blocks=1 00:06:57.982 00:06:57.982 ' 00:06:57.982 19:25:24 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:57.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.982 --rc genhtml_branch_coverage=1 00:06:57.982 --rc genhtml_function_coverage=1 00:06:57.982 --rc genhtml_legend=1 00:06:57.982 --rc geninfo_all_blocks=1 00:06:57.982 --rc geninfo_unexecuted_blocks=1 00:06:57.982 00:06:57.982 ' 00:06:57.982 19:25:24 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:57.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.982 --rc genhtml_branch_coverage=1 00:06:57.982 --rc genhtml_function_coverage=1 00:06:57.982 --rc genhtml_legend=1 00:06:57.982 --rc geninfo_all_blocks=1 00:06:57.982 --rc geninfo_unexecuted_blocks=1 00:06:57.982 00:06:57.982 ' 00:06:57.982 19:25:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:57.982 19:25:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:57.982 19:25:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:57.982 19:25:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:57.982 19:25:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.982 19:25:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.982 19:25:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.982 ************************************ 00:06:57.982 START TEST default_locks 00:06:57.982 ************************************ 00:06:57.982 19:25:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:57.982 19:25:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59087 00:06:57.982 19:25:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59087 00:06:57.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.982 19:25:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:57.982 19:25:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59087 ']' 00:06:57.982 19:25:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.982 19:25:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.982 19:25:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.982 19:25:24 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.982 19:25:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.982 [2024-12-05 19:25:25.067921] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:06:57.982 [2024-12-05 19:25:25.068046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59087 ] 00:06:57.982 [2024-12-05 19:25:25.228722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.241 [2024-12-05 19:25:25.329476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.811 19:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.811 19:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:58.811 19:25:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59087 00:06:58.811 19:25:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59087 00:06:58.811 19:25:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:59.072 19:25:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59087 00:06:59.072 19:25:26 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59087 ']' 00:06:59.072 19:25:26 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59087 00:06:59.072 19:25:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:59.072 19:25:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.072 19:25:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59087 00:06:59.072 19:25:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.072 19:25:26 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.072 killing process with pid 59087 00:06:59.072 19:25:26 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59087' 00:06:59.072 19:25:26 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59087 00:06:59.072 19:25:26 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59087 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59087 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59087 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:00.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:00.467 19:25:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59087 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59087 ']' 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.467 ERROR: process (pid: 59087) is no longer running 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.467 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59087) - No such process 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:00.467 00:07:00.467 real 0m2.695s 00:07:00.467 user 0m2.686s 00:07:00.467 sys 0m0.461s 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.467 19:25:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.467 ************************************ 00:07:00.467 END TEST default_locks 00:07:00.467 ************************************ 00:07:00.727 19:25:27 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:00.727 19:25:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.727 19:25:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.727 19:25:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.727 ************************************ 00:07:00.727 START TEST default_locks_via_rpc 00:07:00.727 ************************************ 00:07:00.727 19:25:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:00.727 19:25:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59145 00:07:00.727 19:25:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59145 00:07:00.727 19:25:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59145 ']' 
00:07:00.727 19:25:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.727 19:25:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.727 19:25:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.727 19:25:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.727 19:25:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.727 19:25:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:00.727 [2024-12-05 19:25:27.828742] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:07:00.727 [2024-12-05 19:25:27.828868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59145 ] 00:07:00.988 [2024-12-05 19:25:27.990893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.988 [2024-12-05 19:25:28.093400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.560 19:25:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.560 19:25:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:01.560 19:25:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:01.560 19:25:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.560 19:25:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.560 19:25:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.560 19:25:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:01.560 19:25:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:01.560 19:25:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:01.560 19:25:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:01.560 19:25:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:01.560 19:25:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.560 19:25:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.560 19:25:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.560 19:25:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59145 00:07:01.560 19:25:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59145 00:07:01.560 19:25:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.820 19:25:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59145 00:07:01.820 19:25:28 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59145 ']' 00:07:01.820 19:25:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59145 00:07:01.820 19:25:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:01.820 19:25:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.820 19:25:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59145 00:07:01.820 19:25:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.820 19:25:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.820 killing process with pid 59145 00:07:01.820 19:25:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59145' 00:07:01.820 19:25:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59145 00:07:01.820 19:25:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59145 00:07:03.766 00:07:03.766 real 0m2.801s 00:07:03.766 user 0m2.808s 00:07:03.766 sys 0m0.497s 00:07:03.766 19:25:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.766 ************************************ 00:07:03.766 END TEST default_locks_via_rpc 00:07:03.766 ************************************ 00:07:03.766 19:25:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.766 19:25:30 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:03.766 19:25:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.766 19:25:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.766 19:25:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.766 ************************************ 00:07:03.766 START TEST non_locking_app_on_locked_coremask 00:07:03.766 ************************************ 00:07:03.766 19:25:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:03.766 19:25:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59208 00:07:03.766 19:25:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59208 /var/tmp/spdk.sock 00:07:03.766 19:25:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.766 19:25:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59208 ']' 00:07:03.766 19:25:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.766 19:25:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.766 19:25:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
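default_locks_via_rpc, which just finished above, exercises the same lock lifecycle over the RPC socket instead of process flags. A hedged reproduction of that sequence with scripts/rpc.py, using only the RPC names and paths that appear in the trace:

    build/bin/spdk_tgt -m 0x1 & tgt=$!               # starts with the core 0 lock held
    # ... waitforlisten "$tgt" ...
    scripts/rpc.py framework_disable_cpumask_locks   # per the no_locks check, the lock goes away
    lslocks -p "$tgt" | grep -c spdk_cpu_lock        # expect 0 while disabled
    scripts/rpc.py framework_enable_cpumask_locks    # lock re-acquired at runtime
    lslocks -p "$tgt" | grep -c spdk_cpu_lock        # expect 1 again
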
00:07:03.766 19:25:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.766 19:25:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.766 [2024-12-05 19:25:30.717590] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:07:03.766 [2024-12-05 19:25:30.717802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59208 ] 00:07:03.766 [2024-12-05 19:25:30.887524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.058 [2024-12-05 19:25:30.990200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.626 19:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.627 19:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:04.627 19:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59224 00:07:04.627 19:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59224 /var/tmp/spdk2.sock 00:07:04.627 19:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59224 ']' 00:07:04.627 19:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.627 19:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.627 19:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:04.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:04.627 19:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.627 19:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.627 19:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.627 [2024-12-05 19:25:31.729172] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:07:04.627 [2024-12-05 19:25:31.729292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59224 ] 00:07:04.888 [2024-12-05 19:25:31.904874] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
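non_locking_app_on_locked_coremask, now starting, runs two targets on the same core: the first claims the core lock, the second opts out and takes a second RPC socket so both can coexist. The launch pattern, with masks, flags and socket paths copied from the trace:

    # first instance: mask 0x1, core locks enabled by default, default /var/tmp/spdk.sock
    build/bin/spdk_tgt -m 0x1 &
    # second instance: same core, but skips lock acquisition and listens elsewhere
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
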
00:07:04.888 [2024-12-05 19:25:31.904925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.888 [2024-12-05 19:25:32.108301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.274 19:25:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.274 19:25:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:06.274 19:25:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59208 00:07:06.274 19:25:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59208 00:07:06.274 19:25:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:06.535 19:25:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59208 00:07:06.535 19:25:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59208 ']' 00:07:06.535 19:25:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59208 00:07:06.535 19:25:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:06.535 19:25:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.535 19:25:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59208 00:07:06.535 killing process with pid 59208 00:07:06.535 19:25:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.535 19:25:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.535 19:25:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59208' 00:07:06.535 19:25:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59208 00:07:06.535 19:25:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59208 00:07:09.831 19:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59224 00:07:09.831 19:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59224 ']' 00:07:09.831 19:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59224 00:07:09.831 19:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:09.831 19:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.831 19:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59224 00:07:09.831 19:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:09.831 killing process with pid 59224 00:07:09.831 19:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:09.831 19:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59224' 00:07:09.831 19:25:36 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59224 00:07:09.831 19:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59224 00:07:11.299 00:07:11.299 real 0m7.620s 00:07:11.299 user 0m7.937s 00:07:11.299 sys 0m0.876s 00:07:11.299 19:25:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.299 ************************************ 00:07:11.299 END TEST non_locking_app_on_locked_coremask 00:07:11.299 ************************************ 00:07:11.299 19:25:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.299 19:25:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:11.299 19:25:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.299 19:25:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.299 19:25:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.299 ************************************ 00:07:11.299 START TEST locking_app_on_unlocked_coremask 00:07:11.299 ************************************ 00:07:11.299 19:25:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:11.299 19:25:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59332 00:07:11.299 19:25:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59332 /var/tmp/spdk.sock 00:07:11.299 19:25:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59332 ']' 00:07:11.299 19:25:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.299 19:25:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.299 19:25:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.299 19:25:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.299 19:25:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:11.299 19:25:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.299 [2024-12-05 19:25:38.370200] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:07:11.299 [2024-12-05 19:25:38.370331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59332 ] 00:07:11.561 [2024-12-05 19:25:38.534383] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
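The locks_exist and no_locks checks seen throughout lean on util-linux lslocks to see which process actually holds a core lock file. Sketches of both helpers as their traces suggest (the bodies in event/cpu_locks.sh may differ cosmetically):

    locks_exist() {   # true if pid $1 holds a lock on a spdk_cpu_lock file
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    no_locks() {      # true if no core lock files remain (needs shopt -s nullglob)
        local lock_files=(/var/tmp/spdk_cpu_lock*)
        (( ${#lock_files[@]} == 0 ))
    }
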
00:07:11.561 [2024-12-05 19:25:38.534456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.561 [2024-12-05 19:25:38.664751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.541 19:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.541 19:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:12.541 19:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59348 00:07:12.541 19:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59348 /var/tmp/spdk2.sock 00:07:12.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:12.541 19:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59348 ']' 00:07:12.541 19:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:12.541 19:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.541 19:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:12.541 19:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:12.541 19:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.541 19:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.541 [2024-12-05 19:25:39.506788] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:07:12.541 [2024-12-05 19:25:39.506941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59348 ] 00:07:12.541 [2024-12-05 19:25:39.691088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.803 [2024-12-05 19:25:39.974819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.349 19:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.349 19:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:15.349 19:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59348 00:07:15.349 19:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:15.349 19:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59348 00:07:15.349 19:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59332 00:07:15.349 19:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59332 ']' 00:07:15.349 19:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59332 00:07:15.349 19:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:15.349 19:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.349 19:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59332 00:07:15.349 killing process with pid 59332 00:07:15.349 19:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.349 19:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.349 19:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59332' 00:07:15.349 19:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59332 00:07:15.349 19:25:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59332 00:07:19.559 19:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59348 00:07:19.559 19:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59348 ']' 00:07:19.559 19:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59348 00:07:19.559 19:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:19.559 19:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.560 19:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59348 00:07:19.560 19:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.560 19:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.560 killing process with pid 59348 00:07:19.560 19:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59348' 00:07:19.560 19:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59348 00:07:19.560 19:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59348 00:07:20.953 00:07:20.953 real 0m9.624s 00:07:20.953 user 0m9.932s 00:07:20.953 sys 0m1.171s 00:07:20.953 19:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.953 ************************************ 00:07:20.953 END TEST locking_app_on_unlocked_coremask 00:07:20.953 ************************************ 00:07:20.953 19:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.953 19:25:47 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:20.953 19:25:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.953 19:25:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.953 19:25:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.953 ************************************ 00:07:20.953 START TEST locking_app_on_locked_coremask 00:07:20.953 ************************************ 00:07:20.953 19:25:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:20.953 19:25:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59474 00:07:20.953 19:25:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59474 /var/tmp/spdk.sock 00:07:20.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.953 19:25:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59474 ']' 00:07:20.953 19:25:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.953 19:25:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.953 19:25:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.953 19:25:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.953 19:25:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:20.953 19:25:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.953 [2024-12-05 19:25:48.078523] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:07:20.953 [2024-12-05 19:25:48.078694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59474 ] 00:07:21.215 [2024-12-05 19:25:48.242348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.215 [2024-12-05 19:25:48.387104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.160 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.160 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:22.160 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59495 00:07:22.160 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:22.160 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59495 /var/tmp/spdk2.sock 00:07:22.160 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:22.160 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59495 /var/tmp/spdk2.sock 00:07:22.161 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:22.161 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.161 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:22.161 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.161 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59495 /var/tmp/spdk2.sock 00:07:22.161 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59495 ']' 00:07:22.161 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:22.161 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.161 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:22.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:22.161 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.161 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.161 [2024-12-05 19:25:49.231356] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:07:22.161 [2024-12-05 19:25:49.231521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59495 ] 00:07:22.423 [2024-12-05 19:25:49.415205] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59474 has claimed it. 00:07:22.423 [2024-12-05 19:25:49.415320] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:22.686 ERROR: process (pid: 59495) is no longer running 00:07:22.686 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59495) - No such process 00:07:22.686 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.686 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:22.686 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:22.686 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:22.686 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:22.686 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:22.686 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59474 00:07:22.686 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59474 00:07:22.686 19:25:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:22.948 19:25:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59474 00:07:22.948 19:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59474 ']' 00:07:22.948 19:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59474 00:07:22.948 19:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:22.948 19:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.948 19:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59474 00:07:22.948 19:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.948 19:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.948 killing process with pid 59474 00:07:22.948 19:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59474' 00:07:22.948 19:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59474 00:07:22.948 19:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59474 00:07:24.864 00:07:24.864 real 0m3.873s 00:07:24.864 user 0m3.995s 00:07:24.864 sys 0m0.760s 00:07:24.864 19:25:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.864 19:25:51 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:07:24.864 ************************************ 00:07:24.864 END TEST locking_app_on_locked_coremask 00:07:24.864 ************************************ 00:07:24.864 19:25:51 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:24.864 19:25:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.864 19:25:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.864 19:25:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.864 ************************************ 00:07:24.864 START TEST locking_overlapped_coremask 00:07:24.864 ************************************ 00:07:24.864 19:25:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:24.864 19:25:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59554 00:07:24.864 19:25:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59554 /var/tmp/spdk.sock 00:07:24.864 19:25:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59554 ']' 00:07:24.864 19:25:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.864 19:25:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.864 19:25:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.864 19:25:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.864 19:25:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.864 19:25:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:24.864 [2024-12-05 19:25:52.038262] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:07:24.864 [2024-12-05 19:25:52.038448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59554 ] 00:07:25.124 [2024-12-05 19:25:52.209917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.124 [2024-12-05 19:25:52.356832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.124 [2024-12-05 19:25:52.357374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.124 [2024-12-05 19:25:52.357392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.065 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.065 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:26.065 19:25:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59572 00:07:26.065 19:25:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59572 /var/tmp/spdk2.sock 00:07:26.065 19:25:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:26.065 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:26.065 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59572 /var/tmp/spdk2.sock 00:07:26.065 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:26.065 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.065 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:26.065 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.065 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59572 /var/tmp/spdk2.sock 00:07:26.065 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59572 ']' 00:07:26.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:26.065 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:26.065 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.065 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:26.065 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.065 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.065 [2024-12-05 19:25:53.203135] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:07:26.065 [2024-12-05 19:25:53.203850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59572 ] 00:07:26.326 [2024-12-05 19:25:53.411274] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59554 has claimed it. 00:07:26.326 [2024-12-05 19:25:53.411345] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:26.897 ERROR: process (pid: 59572) is no longer running 00:07:26.897 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59572) - No such process 00:07:26.897 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.897 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:26.897 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:26.897 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:26.897 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:26.897 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:26.897 19:25:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:26.897 19:25:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:26.897 19:25:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:26.897 19:25:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:26.897 19:25:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59554 00:07:26.897 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59554 ']' 00:07:26.897 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59554 00:07:26.897 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:26.897 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.897 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59554 00:07:26.897 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.897 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.897 killing process with pid 59554 00:07:26.897 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59554' 00:07:26.897 19:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59554 00:07:26.897 19:25:53 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59554 00:07:28.815 00:07:28.815 real 0m3.721s 00:07:28.815 user 0m10.023s 00:07:28.815 sys 0m0.639s 00:07:28.815 19:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.815 19:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.815 ************************************ 00:07:28.815 END TEST locking_overlapped_coremask 00:07:28.815 ************************************ 00:07:28.815 19:25:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:28.815 19:25:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.815 19:25:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.815 19:25:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.815 ************************************ 00:07:28.815 START TEST locking_overlapped_coremask_via_rpc 00:07:28.815 ************************************ 00:07:28.815 19:25:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:28.815 19:25:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59636 00:07:28.815 19:25:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59636 /var/tmp/spdk.sock 00:07:28.815 19:25:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59636 ']' 00:07:28.815 19:25:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.815 19:25:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.815 19:25:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.815 19:25:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.815 19:25:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.815 19:25:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:28.815 [2024-12-05 19:25:55.838075] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:07:28.815 [2024-12-05 19:25:55.838233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59636 ] 00:07:28.815 [2024-12-05 19:25:56.007659] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
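The check_remaining_locks call traced at the end of locking_overlapped_coremask verifies that the surviving -m 0x7 target holds exactly one lock file per core. Restated from the trace as a standalone helper:

    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)                    # what is actually on disk
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # one file per core in 0x7
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }
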
00:07:28.815 [2024-12-05 19:25:56.007749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.076 [2024-12-05 19:25:56.151199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.076 [2024-12-05 19:25:56.151720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.076 [2024-12-05 19:25:56.151822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.031 19:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.031 19:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:30.031 19:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59654 00:07:30.031 19:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59654 /var/tmp/spdk2.sock 00:07:30.031 19:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59654 ']' 00:07:30.031 19:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:30.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:30.031 19:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.031 19:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:30.031 19:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.031 19:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.031 19:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:30.031 [2024-12-05 19:25:57.002287] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:07:30.031 [2024-12-05 19:25:57.002450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59654 ] 00:07:30.031 [2024-12-05 19:25:57.186682] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:30.031 [2024-12-05 19:25:57.186768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.293 [2024-12-05 19:25:57.483812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.293 [2024-12-05 19:25:57.487060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.293 [2024-12-05 19:25:57.487097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.841 [2024-12-05 19:25:59.617910] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59636 has claimed it. 
00:07:32.841 request: 00:07:32.841 { 00:07:32.841 "method": "framework_enable_cpumask_locks", 00:07:32.841 "req_id": 1 00:07:32.841 } 00:07:32.841 Got JSON-RPC error response 00:07:32.841 response: 00:07:32.841 { 00:07:32.841 "code": -32603, 00:07:32.841 "message": "Failed to claim CPU core: 2" 00:07:32.841 } 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59636 /var/tmp/spdk.sock 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59636 ']' 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.841 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:32.842 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59654 /var/tmp/spdk2.sock 00:07:32.842 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59654 ']' 00:07:32.842 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:32.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:32.842 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.842 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
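The -32603 "Failed to claim CPU core: 2" response above is the outcome this test constructs on purpose: masks 0x7 (cores 0-2) and 0x1c (cores 2-4) share core 2, both targets start with --disable-cpumask-locks, and only the first framework_enable_cpumask_locks call can win the shared core. A condensed replay of the steps, using the same binaries and sockets as this log (readiness waits elided):

    # Both targets start without claiming cores (locks disabled on the command line).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &
    # ... wait for both RPC sockets to come up ...
    # First enable succeeds and locks cores 0-2:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
    # Second enable cannot claim core 2 and fails with the -32603 error shown above:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
        framework_enable_cpumask_locks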
00:07:32.842 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.842 19:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.842 19:26:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.842 19:26:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:32.842 19:26:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:32.842 ************************************ 00:07:32.842 END TEST locking_overlapped_coremask_via_rpc 00:07:32.842 ************************************ 00:07:32.842 19:26:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:32.842 19:26:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:32.842 19:26:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:32.842 00:07:32.842 real 0m4.342s 00:07:32.842 user 0m1.391s 00:07:32.842 sys 0m0.196s 00:07:32.842 19:26:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.842 19:26:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.104 19:26:00 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:33.104 19:26:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59636 ]] 00:07:33.104 19:26:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59636 00:07:33.104 19:26:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59636 ']' 00:07:33.104 19:26:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59636 00:07:33.104 19:26:00 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:33.104 19:26:00 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.104 19:26:00 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59636 00:07:33.104 killing process with pid 59636 00:07:33.104 19:26:00 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.104 19:26:00 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.104 19:26:00 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59636' 00:07:33.104 19:26:00 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59636 00:07:33.104 19:26:00 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59636 00:07:35.022 19:26:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59654 ]] 00:07:35.023 19:26:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59654 00:07:35.023 19:26:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59654 ']' 00:07:35.023 19:26:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59654 00:07:35.023 19:26:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:35.023 19:26:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.023 
19:26:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59654 00:07:35.023 killing process with pid 59654 00:07:35.023 19:26:01 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:35.023 19:26:01 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:35.023 19:26:01 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59654' 00:07:35.023 19:26:01 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59654 00:07:35.023 19:26:01 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59654 00:07:36.937 19:26:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:36.937 Process with pid 59636 is not found 00:07:36.937 19:26:03 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:36.937 19:26:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59636 ]] 00:07:36.937 19:26:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59636 00:07:36.937 19:26:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59636 ']' 00:07:36.937 19:26:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59636 00:07:36.937 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59636) - No such process 00:07:36.937 19:26:03 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59636 is not found' 00:07:36.937 19:26:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59654 ]] 00:07:36.937 19:26:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59654 00:07:36.937 Process with pid 59654 is not found 00:07:36.937 19:26:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59654 ']' 00:07:36.937 19:26:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59654 00:07:36.937 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59654) - No such process 00:07:36.937 19:26:03 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59654 is not found' 00:07:36.937 19:26:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:36.937 00:07:36.937 real 0m38.886s 00:07:36.937 user 1m9.100s 00:07:36.937 sys 0m5.846s 00:07:36.937 ************************************ 00:07:36.937 19:26:03 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.937 19:26:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.937 END TEST cpu_locks 00:07:36.937 ************************************ 00:07:36.937 00:07:36.937 real 1m7.500s 00:07:36.937 user 2m5.550s 00:07:36.937 sys 0m8.880s 00:07:36.937 19:26:03 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.937 ************************************ 00:07:36.937 END TEST event 00:07:36.937 ************************************ 00:07:36.937 19:26:03 event -- common/autotest_common.sh@10 -- # set +x 00:07:36.937 19:26:03 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:36.937 19:26:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.937 19:26:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.937 19:26:03 -- common/autotest_common.sh@10 -- # set +x 00:07:36.937 ************************************ 00:07:36.937 START TEST thread 00:07:36.937 ************************************ 00:07:36.937 19:26:03 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:36.937 * Looking for test storage... 
00:07:36.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:36.937 19:26:03 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:36.937 19:26:03 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:07:36.937 19:26:03 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:36.937 19:26:03 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:36.937 19:26:03 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.937 19:26:03 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.937 19:26:03 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.937 19:26:03 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.937 19:26:03 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.937 19:26:03 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.937 19:26:03 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.937 19:26:03 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.937 19:26:03 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.937 19:26:03 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.937 19:26:03 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.937 19:26:03 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:36.937 19:26:03 thread -- scripts/common.sh@345 -- # : 1 00:07:36.937 19:26:03 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.937 19:26:03 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:36.937 19:26:03 thread -- scripts/common.sh@365 -- # decimal 1 00:07:36.937 19:26:03 thread -- scripts/common.sh@353 -- # local d=1 00:07:36.937 19:26:04 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.937 19:26:04 thread -- scripts/common.sh@355 -- # echo 1 00:07:36.937 19:26:04 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.937 19:26:04 thread -- scripts/common.sh@366 -- # decimal 2 00:07:36.937 19:26:04 thread -- scripts/common.sh@353 -- # local d=2 00:07:36.937 19:26:04 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.937 19:26:04 thread -- scripts/common.sh@355 -- # echo 2 00:07:36.937 19:26:04 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.937 19:26:04 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.937 19:26:04 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.937 19:26:04 thread -- scripts/common.sh@368 -- # return 0 00:07:36.937 19:26:04 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.937 19:26:04 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:36.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.937 --rc genhtml_branch_coverage=1 00:07:36.937 --rc genhtml_function_coverage=1 00:07:36.937 --rc genhtml_legend=1 00:07:36.937 --rc geninfo_all_blocks=1 00:07:36.937 --rc geninfo_unexecuted_blocks=1 00:07:36.937 00:07:36.937 ' 00:07:36.937 19:26:04 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:36.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.937 --rc genhtml_branch_coverage=1 00:07:36.937 --rc genhtml_function_coverage=1 00:07:36.937 --rc genhtml_legend=1 00:07:36.937 --rc geninfo_all_blocks=1 00:07:36.937 --rc geninfo_unexecuted_blocks=1 00:07:36.937 00:07:36.937 ' 00:07:36.937 19:26:04 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:36.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:36.937 --rc genhtml_branch_coverage=1 00:07:36.937 --rc genhtml_function_coverage=1 00:07:36.937 --rc genhtml_legend=1 00:07:36.937 --rc geninfo_all_blocks=1 00:07:36.937 --rc geninfo_unexecuted_blocks=1 00:07:36.937 00:07:36.937 ' 00:07:36.937 19:26:04 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:36.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.937 --rc genhtml_branch_coverage=1 00:07:36.937 --rc genhtml_function_coverage=1 00:07:36.937 --rc genhtml_legend=1 00:07:36.937 --rc geninfo_all_blocks=1 00:07:36.937 --rc geninfo_unexecuted_blocks=1 00:07:36.937 00:07:36.937 ' 00:07:36.937 19:26:04 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:36.937 19:26:04 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:36.937 19:26:04 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.937 19:26:04 thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.937 ************************************ 00:07:36.937 START TEST thread_poller_perf 00:07:36.937 ************************************ 00:07:36.937 19:26:04 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:36.937 [2024-12-05 19:26:04.061537] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:07:36.937 [2024-12-05 19:26:04.061754] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59838 ] 00:07:37.198 [2024-12-05 19:26:04.229097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.198 [2024-12-05 19:26:04.370071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.198 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:38.579 [2024-12-05T19:26:05.834Z] ====================================== 00:07:38.579 [2024-12-05T19:26:05.834Z] busy:2612094342 (cyc) 00:07:38.579 [2024-12-05T19:26:05.834Z] total_run_count: 305000 00:07:38.579 [2024-12-05T19:26:05.834Z] tsc_hz: 2600000000 (cyc) 00:07:38.579 [2024-12-05T19:26:05.834Z] ====================================== 00:07:38.579 [2024-12-05T19:26:05.834Z] poller_cost: 8564 (cyc), 3293 (nsec) 00:07:38.579 ************************************ 00:07:38.579 END TEST thread_poller_perf 00:07:38.579 ************************************ 00:07:38.579 00:07:38.579 real 0m1.535s 00:07:38.579 user 0m1.328s 00:07:38.579 sys 0m0.095s 00:07:38.579 19:26:05 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.579 19:26:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:38.579 19:26:05 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:38.579 19:26:05 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:38.579 19:26:05 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.579 19:26:05 thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.579 ************************************ 00:07:38.579 START TEST thread_poller_perf 00:07:38.579 ************************************ 00:07:38.579 19:26:05 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:38.579 [2024-12-05 19:26:05.678136] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:07:38.579 [2024-12-05 19:26:05.678314] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59869 ] 00:07:38.839 [2024-12-05 19:26:05.844795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.839 [2024-12-05 19:26:05.990578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.839 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:40.248 [2024-12-05T19:26:07.503Z] ====================================== 00:07:40.248 [2024-12-05T19:26:07.503Z] busy:2603517462 (cyc) 00:07:40.248 [2024-12-05T19:26:07.503Z] total_run_count: 3561000 00:07:40.248 [2024-12-05T19:26:07.503Z] tsc_hz: 2600000000 (cyc) 00:07:40.248 [2024-12-05T19:26:07.503Z] ====================================== 00:07:40.248 [2024-12-05T19:26:07.503Z] poller_cost: 731 (cyc), 281 (nsec) 00:07:40.248 00:07:40.248 real 0m1.534s 00:07:40.248 user 0m1.322s 00:07:40.248 sys 0m0.099s 00:07:40.248 19:26:07 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.248 ************************************ 00:07:40.248 END TEST thread_poller_perf 00:07:40.248 ************************************ 00:07:40.248 19:26:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:40.248 19:26:07 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:40.248 00:07:40.248 real 0m3.387s 00:07:40.248 user 0m2.764s 00:07:40.248 sys 0m0.349s 00:07:40.248 19:26:07 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.248 19:26:07 thread -- common/autotest_common.sh@10 -- # set +x 00:07:40.248 ************************************ 00:07:40.248 END TEST thread 00:07:40.248 ************************************ 00:07:40.248 19:26:07 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:40.248 19:26:07 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:40.248 19:26:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.248 19:26:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.248 19:26:07 -- common/autotest_common.sh@10 -- # set +x 00:07:40.248 ************************************ 00:07:40.248 START TEST app_cmdline 00:07:40.248 ************************************ 00:07:40.248 19:26:07 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:40.248 * Looking for test storage... 
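The poller_cost lines in the two result blocks above are straight division, so they can be re-derived from the reported busy cycles, total_run_count, and tsc_hz (all numbers copied from the log):

    # poller_cost(cyc)  = busy / total_run_count
    # poller_cost(nsec) = cyc * 1e9 / tsc_hz   (tsc_hz = 2.6 GHz here)
    echo $(( 2612094342 / 305000 ))              # -> 8564 cyc   (-l 1, timed pollers)
    echo $(( 8564 * 1000000000 / 2600000000 ))   # -> 3293 nsec
    echo $(( 2603517462 / 3561000 ))             # -> 731 cyc    (-l 0, 0-period pollers)
    echo $(( 731 * 1000000000 / 2600000000 ))    # -> 281 nsec

In this run the 1-microsecond timed pollers cost about an order of magnitude more per invocation than the 0-period ones (3293 vs 281 nsec), which is the contrast the two back-to-back runs measure.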
00:07:40.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:40.248 19:26:07 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:40.248 19:26:07 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:40.248 19:26:07 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:40.248 19:26:07 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.248 19:26:07 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:40.248 19:26:07 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:40.248 19:26:07 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:40.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.248 --rc genhtml_branch_coverage=1 00:07:40.248 --rc genhtml_function_coverage=1 00:07:40.248 --rc genhtml_legend=1 00:07:40.248 --rc geninfo_all_blocks=1 00:07:40.248 --rc geninfo_unexecuted_blocks=1 00:07:40.248 00:07:40.248 ' 00:07:40.248 19:26:07 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:40.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.248 --rc genhtml_branch_coverage=1 00:07:40.248 --rc genhtml_function_coverage=1 00:07:40.248 --rc genhtml_legend=1 00:07:40.248 --rc geninfo_all_blocks=1 00:07:40.248 --rc geninfo_unexecuted_blocks=1 00:07:40.248 00:07:40.248 ' 00:07:40.248 19:26:07 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:40.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.248 --rc genhtml_branch_coverage=1 00:07:40.248 --rc genhtml_function_coverage=1 00:07:40.248 --rc genhtml_legend=1 00:07:40.248 --rc geninfo_all_blocks=1 00:07:40.248 --rc geninfo_unexecuted_blocks=1 00:07:40.248 00:07:40.248 ' 00:07:40.248 19:26:07 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:40.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.248 --rc genhtml_branch_coverage=1 00:07:40.248 --rc genhtml_function_coverage=1 00:07:40.248 --rc genhtml_legend=1 00:07:40.248 --rc geninfo_all_blocks=1 00:07:40.248 --rc geninfo_unexecuted_blocks=1 00:07:40.248 00:07:40.249 ' 00:07:40.249 19:26:07 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:40.249 19:26:07 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59958 00:07:40.249 19:26:07 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59958 00:07:40.249 19:26:07 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59958 ']' 00:07:40.249 19:26:07 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.249 19:26:07 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.249 19:26:07 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.249 19:26:07 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.249 19:26:07 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:40.249 19:26:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:40.508 [2024-12-05 19:26:07.563361] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:07:40.508 [2024-12-05 19:26:07.563802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59958 ] 00:07:40.508 [2024-12-05 19:26:07.732178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.770 [2024-12-05 19:26:07.877173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.713 19:26:08 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.713 19:26:08 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:41.713 19:26:08 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:41.713 { 00:07:41.713 "version": "SPDK v25.01-pre git sha1 e2dfdf06c", 00:07:41.713 "fields": { 00:07:41.713 "major": 25, 00:07:41.713 "minor": 1, 00:07:41.713 "patch": 0, 00:07:41.713 "suffix": "-pre", 00:07:41.713 "commit": "e2dfdf06c" 00:07:41.713 } 00:07:41.713 } 00:07:41.713 19:26:08 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:41.713 19:26:08 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:41.713 19:26:08 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:41.713 19:26:08 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:41.713 19:26:08 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:41.714 19:26:08 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.714 19:26:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:41.714 19:26:08 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:41.714 19:26:08 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:41.714 19:26:08 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.714 19:26:08 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:41.714 19:26:08 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:41.714 19:26:08 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:41.714 19:26:08 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:41.714 19:26:08 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:41.714 19:26:08 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:41.714 19:26:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.714 19:26:08 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:41.714 19:26:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.714 19:26:08 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:41.714 19:26:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.714 19:26:08 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:41.714 19:26:08 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:41.714 19:26:08 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:41.976 request: 00:07:41.976 { 00:07:41.976 "method": "env_dpdk_get_mem_stats", 00:07:41.976 "req_id": 1 00:07:41.976 } 00:07:41.976 Got JSON-RPC error response 00:07:41.976 response: 00:07:41.976 { 00:07:41.976 "code": -32601, 00:07:41.976 "message": "Method not found" 00:07:41.976 } 00:07:41.976 19:26:09 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:41.976 19:26:09 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:41.976 19:26:09 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:41.976 19:26:09 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:41.976 19:26:09 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59958 00:07:41.976 19:26:09 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59958 ']' 00:07:41.976 19:26:09 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59958 00:07:41.976 19:26:09 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:41.976 19:26:09 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.976 19:26:09 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59958 00:07:41.976 killing process with pid 59958 00:07:41.976 19:26:09 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.976 19:26:09 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.976 19:26:09 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59958' 00:07:41.976 19:26:09 app_cmdline -- common/autotest_common.sh@973 -- # kill 59958 00:07:41.976 19:26:09 app_cmdline -- common/autotest_common.sh@978 -- # wait 59958 00:07:43.891 00:07:43.891 real 0m3.633s 00:07:43.891 user 0m3.838s 00:07:43.891 sys 0m0.623s 00:07:43.891 19:26:10 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.891 19:26:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:43.891 ************************************ 00:07:43.891 END TEST app_cmdline 00:07:43.891 ************************************ 00:07:43.891 19:26:10 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:43.891 19:26:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:43.891 19:26:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.891 19:26:10 -- common/autotest_common.sh@10 -- # set +x 00:07:43.891 ************************************ 00:07:43.891 START TEST version 00:07:43.891 ************************************ 00:07:43.891 19:26:11 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:43.891 * Looking for test storage... 
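The "Method not found" (-32601) above is the RPC allowlist working as configured: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods answer and everything else is rejected. Replayed against the same socket:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC spdk_get_version          # allowed: returns the version JSON seen above
    $RPC rpc_get_methods           # allowed: lists exactly the two permitted methods
    $RPC env_dpdk_get_mem_stats    # rejected: JSON-RPC error -32601, Method not found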
00:07:43.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:43.891 19:26:11 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:43.891 19:26:11 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:43.891 19:26:11 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:44.151 19:26:11 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:44.151 19:26:11 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.151 19:26:11 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.151 19:26:11 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.151 19:26:11 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.151 19:26:11 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.151 19:26:11 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.151 19:26:11 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.151 19:26:11 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.151 19:26:11 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.151 19:26:11 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.151 19:26:11 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.151 19:26:11 version -- scripts/common.sh@344 -- # case "$op" in 00:07:44.151 19:26:11 version -- scripts/common.sh@345 -- # : 1 00:07:44.151 19:26:11 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.151 19:26:11 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:44.151 19:26:11 version -- scripts/common.sh@365 -- # decimal 1 00:07:44.151 19:26:11 version -- scripts/common.sh@353 -- # local d=1 00:07:44.151 19:26:11 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.151 19:26:11 version -- scripts/common.sh@355 -- # echo 1 00:07:44.151 19:26:11 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.151 19:26:11 version -- scripts/common.sh@366 -- # decimal 2 00:07:44.151 19:26:11 version -- scripts/common.sh@353 -- # local d=2 00:07:44.151 19:26:11 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.151 19:26:11 version -- scripts/common.sh@355 -- # echo 2 00:07:44.151 19:26:11 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.151 19:26:11 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.151 19:26:11 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.151 19:26:11 version -- scripts/common.sh@368 -- # return 0 00:07:44.151 19:26:11 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.151 19:26:11 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:44.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.152 --rc genhtml_branch_coverage=1 00:07:44.152 --rc genhtml_function_coverage=1 00:07:44.152 --rc genhtml_legend=1 00:07:44.152 --rc geninfo_all_blocks=1 00:07:44.152 --rc geninfo_unexecuted_blocks=1 00:07:44.152 00:07:44.152 ' 00:07:44.152 19:26:11 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:44.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.152 --rc genhtml_branch_coverage=1 00:07:44.152 --rc genhtml_function_coverage=1 00:07:44.152 --rc genhtml_legend=1 00:07:44.152 --rc geninfo_all_blocks=1 00:07:44.152 --rc geninfo_unexecuted_blocks=1 00:07:44.152 00:07:44.152 ' 00:07:44.152 19:26:11 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:44.152 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:44.152 --rc genhtml_branch_coverage=1 00:07:44.152 --rc genhtml_function_coverage=1 00:07:44.152 --rc genhtml_legend=1 00:07:44.152 --rc geninfo_all_blocks=1 00:07:44.152 --rc geninfo_unexecuted_blocks=1 00:07:44.152 00:07:44.152 ' 00:07:44.152 19:26:11 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:44.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.152 --rc genhtml_branch_coverage=1 00:07:44.152 --rc genhtml_function_coverage=1 00:07:44.152 --rc genhtml_legend=1 00:07:44.152 --rc geninfo_all_blocks=1 00:07:44.152 --rc geninfo_unexecuted_blocks=1 00:07:44.152 00:07:44.152 ' 00:07:44.152 19:26:11 version -- app/version.sh@17 -- # get_header_version major 00:07:44.152 19:26:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:44.152 19:26:11 version -- app/version.sh@14 -- # cut -f2 00:07:44.152 19:26:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:44.152 19:26:11 version -- app/version.sh@17 -- # major=25 00:07:44.152 19:26:11 version -- app/version.sh@18 -- # get_header_version minor 00:07:44.152 19:26:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:44.152 19:26:11 version -- app/version.sh@14 -- # cut -f2 00:07:44.152 19:26:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:44.152 19:26:11 version -- app/version.sh@18 -- # minor=1 00:07:44.152 19:26:11 version -- app/version.sh@19 -- # get_header_version patch 00:07:44.152 19:26:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:44.152 19:26:11 version -- app/version.sh@14 -- # cut -f2 00:07:44.152 19:26:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:44.152 19:26:11 version -- app/version.sh@19 -- # patch=0 00:07:44.152 19:26:11 version -- app/version.sh@20 -- # get_header_version suffix 00:07:44.152 19:26:11 version -- app/version.sh@14 -- # cut -f2 00:07:44.152 19:26:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:44.152 19:26:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:44.152 19:26:11 version -- app/version.sh@20 -- # suffix=-pre 00:07:44.152 19:26:11 version -- app/version.sh@22 -- # version=25.1 00:07:44.152 19:26:11 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:44.152 19:26:11 version -- app/version.sh@28 -- # version=25.1rc0 00:07:44.152 19:26:11 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:44.152 19:26:11 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:44.152 19:26:11 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:44.152 19:26:11 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:44.152 ************************************ 00:07:44.152 END TEST version 00:07:44.152 ************************************ 00:07:44.152 00:07:44.152 real 0m0.226s 00:07:44.152 user 0m0.122s 00:07:44.152 sys 0m0.124s 00:07:44.152 19:26:11 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.152 19:26:11 version -- common/autotest_common.sh@10 -- # set +x 00:07:44.152 19:26:11 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:44.152 19:26:11 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:44.152 19:26:11 -- spdk/autotest.sh@194 -- # uname -s 00:07:44.152 19:26:11 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:44.152 19:26:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:44.152 19:26:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:44.152 19:26:11 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:44.152 19:26:11 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:44.152 19:26:11 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:44.152 19:26:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.152 19:26:11 -- common/autotest_common.sh@10 -- # set +x 00:07:44.152 ************************************ 00:07:44.152 START TEST blockdev_nvme 00:07:44.152 ************************************ 00:07:44.152 19:26:11 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:44.152 * Looking for test storage... 00:07:44.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:44.152 19:26:11 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:44.152 19:26:11 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:44.152 19:26:11 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:44.412 19:26:11 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.412 19:26:11 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:44.412 19:26:11 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.412 19:26:11 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:44.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.412 --rc genhtml_branch_coverage=1 00:07:44.412 --rc genhtml_function_coverage=1 00:07:44.412 --rc genhtml_legend=1 00:07:44.412 --rc geninfo_all_blocks=1 00:07:44.412 --rc geninfo_unexecuted_blocks=1 00:07:44.412 00:07:44.412 ' 00:07:44.412 19:26:11 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:44.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.412 --rc genhtml_branch_coverage=1 00:07:44.412 --rc genhtml_function_coverage=1 00:07:44.412 --rc genhtml_legend=1 00:07:44.412 --rc geninfo_all_blocks=1 00:07:44.412 --rc geninfo_unexecuted_blocks=1 00:07:44.412 00:07:44.412 ' 00:07:44.412 19:26:11 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:44.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.412 --rc genhtml_branch_coverage=1 00:07:44.412 --rc genhtml_function_coverage=1 00:07:44.412 --rc genhtml_legend=1 00:07:44.412 --rc geninfo_all_blocks=1 00:07:44.412 --rc geninfo_unexecuted_blocks=1 00:07:44.412 00:07:44.412 ' 00:07:44.412 19:26:11 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:44.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.412 --rc genhtml_branch_coverage=1 00:07:44.412 --rc genhtml_function_coverage=1 00:07:44.412 --rc genhtml_legend=1 00:07:44.412 --rc geninfo_all_blocks=1 00:07:44.412 --rc geninfo_unexecuted_blocks=1 00:07:44.412 00:07:44.412 ' 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:44.412 19:26:11 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60141 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:44.412 19:26:11 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60141 00:07:44.412 19:26:11 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 60141 ']' 00:07:44.412 19:26:11 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.412 19:26:11 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.413 19:26:11 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.413 19:26:11 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:44.413 19:26:11 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.413 19:26:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:44.413 [2024-12-05 19:26:11.556898] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
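The setup step below feeds load_subsystem_config a bdev config generated by gen_nvme.sh; for reference, a single-controller cut of the four-controller JSON that appears in the trace (same method and parameter names, only Nvme0 kept):

    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
        }
      ]
    }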
00:07:44.413 [2024-12-05 19:26:11.557211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60141 ] 00:07:44.673 [2024-12-05 19:26:11.719600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.673 [2024-12-05 19:26:11.840783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.642 19:26:12 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.642 19:26:12 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:07:45.642 19:26:12 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:45.642 19:26:12 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:07:45.642 19:26:12 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:45.642 19:26:12 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:45.642 19:26:12 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:45.642 19:26:12 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:45.642 19:26:12 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.642 19:26:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.642 19:26:12 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.642 19:26:12 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:45.642 19:26:12 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.642 19:26:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.905 19:26:12 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.905 19:26:12 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:07:45.905 19:26:12 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:45.905 19:26:12 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.905 19:26:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.905 19:26:12 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.905 19:26:12 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:45.905 19:26:12 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.905 19:26:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.905 19:26:12 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.905 19:26:12 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:45.905 19:26:12 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.905 19:26:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.905 19:26:12 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.905 19:26:12 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:45.905 19:26:12 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:45.905 19:26:12 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:45.905 19:26:12 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.905 19:26:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.905 19:26:12 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.905 19:26:12 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:45.905 19:26:12 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:45.906 19:26:12 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "7e86e5dc-6159-4b19-ab04-fd98ac4c1d03"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "7e86e5dc-6159-4b19-ab04-fd98ac4c1d03",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "fcd65065-8b84-4923-a04b-afc625feba3e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "fcd65065-8b84-4923-a04b-afc625feba3e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ba29c889-b1be-4c19-88a5-c202b2b57cb9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ba29c889-b1be-4c19-88a5-c202b2b57cb9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "93573669-fd3f-4dfd-afc5-48ebff41c694"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "93573669-fd3f-4dfd-afc5-48ebff41c694",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "ba108587-2bd4-4fe3-8640-124302199eeb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "ba108587-2bd4-4fe3-8640-124302199eeb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "233569be-fb7e-460f-b941-f54334fe138c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "233569be-fb7e-460f-b941-f54334fe138c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:45.906 19:26:13 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:45.906 19:26:13 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:45.906 19:26:13 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:45.906 19:26:13 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 60141 00:07:45.906 19:26:13 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 60141 ']' 00:07:45.906 19:26:13 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 60141 00:07:45.906 19:26:13 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:07:45.906 19:26:13 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.906 19:26:13 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60141 00:07:45.906 killing process with pid 60141 00:07:45.906 19:26:13 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.906 19:26:13 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.906 19:26:13 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60141' 00:07:45.906 19:26:13 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 60141 00:07:45.906 19:26:13 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 60141 00:07:47.819 19:26:14 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:47.819 19:26:14 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:47.819 19:26:14 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:47.819 19:26:14 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.819 19:26:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.819 ************************************ 00:07:47.819 START TEST bdev_hello_world 00:07:47.819 ************************************ 00:07:47.819 19:26:14 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:47.819 [2024-12-05 19:26:14.839796] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:07:47.819 [2024-12-05 19:26:14.840539] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60227 ] 00:07:47.819 [2024-12-05 19:26:15.014385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.079 [2024-12-05 19:26:15.151166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.651 [2024-12-05 19:26:15.755366] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:48.651 [2024-12-05 19:26:15.755452] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:48.651 [2024-12-05 19:26:15.755479] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:48.651 [2024-12-05 19:26:15.758376] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:48.651 [2024-12-05 19:26:15.759358] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:48.651 [2024-12-05 19:26:15.759399] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:48.651 [2024-12-05 19:26:15.760386] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
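The run_test invocation above drives SPDK's hello_bdev example end to end: it opens the Nvme0n1 bdev, writes "Hello World!", reads it back, and stops the app. A minimal sketch of the same round trip by hand, assuming this log's vagrant checkout layout and a bdev.json equivalent to the scripts/gen_nvme.sh output loaded earlier:

    # Sketch only: paths mirror this log's layout; adjust for your checkout.
    cd /home/vagrant/spdk_repo/spdk
    sudo scripts/setup.sh                 # rebind the QEMU NVMe controllers to userspace drivers
    sudo build/examples/hello_bdev \
        --json test/bdev/bdev.json \      # bdev_nvme_attach_controller config, as dumped above
        -b Nvme0n1                        # target bdev: the app writes, then reads back, "Hello World!"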
00:07:48.651 00:07:48.651 [2024-12-05 19:26:15.760425] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:49.597 ************************************ 00:07:49.597 END TEST bdev_hello_world 00:07:49.597 ************************************ 00:07:49.597 00:07:49.597 real 0m1.811s 00:07:49.597 user 0m1.449s 00:07:49.597 sys 0m0.246s 00:07:49.597 19:26:16 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.597 19:26:16 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:49.597 19:26:16 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:49.597 19:26:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:49.597 19:26:16 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.597 19:26:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:49.597 ************************************ 00:07:49.597 START TEST bdev_bounds 00:07:49.597 ************************************ 00:07:49.597 19:26:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:49.597 19:26:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60269 00:07:49.597 19:26:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:49.597 Process bdevio pid: 60269 00:07:49.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.597 19:26:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60269' 00:07:49.597 19:26:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60269 00:07:49.597 19:26:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 60269 ']' 00:07:49.597 19:26:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.597 19:26:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.597 19:26:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.597 19:26:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:49.597 19:26:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.597 19:26:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:49.597 [2024-12-05 19:26:16.734882] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:07:49.597 [2024-12-05 19:26:16.735763] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60269 ] 00:07:49.859 [2024-12-05 19:26:16.902304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:49.859 [2024-12-05 19:26:17.044330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.859 [2024-12-05 19:26:17.044921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.859 [2024-12-05 19:26:17.045100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.816 19:26:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.816 19:26:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:50.816 19:26:17 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:50.816 I/O targets: 00:07:50.816 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:50.816 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:50.816 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:50.816 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:50.816 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:50.816 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:50.816 00:07:50.816 00:07:50.816 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.816 http://cunit.sourceforge.net/ 00:07:50.816 00:07:50.816 00:07:50.816 Suite: bdevio tests on: Nvme3n1 00:07:50.816 Test: blockdev write read block ...passed 00:07:50.816 Test: blockdev write zeroes read block ...passed 00:07:50.816 Test: blockdev write zeroes read no split ...passed 00:07:50.816 Test: blockdev write zeroes read split ...passed 00:07:50.816 Test: blockdev write zeroes read split partial ...passed 00:07:50.816 Test: blockdev reset ...[2024-12-05 19:26:17.858143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:50.816 [2024-12-05 19:26:17.864685] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:50.816 passed 00:07:50.816 Test: blockdev write read 8 blocks ...passed 00:07:50.816 Test: blockdev write read size > 128k ...passed 00:07:50.816 Test: blockdev write read invalid size ...passed 00:07:50.816 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:50.817 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:50.817 Test: blockdev write read max offset ...passed 00:07:50.817 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:50.817 Test: blockdev writev readv 8 blocks ...passed 00:07:50.817 Test: blockdev writev readv 30 x 1block ...passed 00:07:50.817 Test: blockdev writev readv block ...passed 00:07:50.817 Test: blockdev writev readv size > 128k ...passed 00:07:50.817 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:50.817 Test: blockdev comparev and writev ...[2024-12-05 19:26:17.887685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ace0a000 len:0x1000 00:07:50.817 [2024-12-05 19:26:17.887943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:50.817 passed 00:07:50.817 Test: blockdev nvme passthru rw ...passed 00:07:50.817 Test: blockdev nvme passthru vendor specific ...[2024-12-05 19:26:17.890134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:50.817 [2024-12-05 19:26:17.890196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:50.817 passed 00:07:50.817 Test: blockdev nvme admin passthru ...passed 00:07:50.817 Test: blockdev copy ...passed 00:07:50.817 Suite: bdevio tests on: Nvme2n3 00:07:50.817 Test: blockdev write read block ...passed 00:07:50.817 Test: blockdev write zeroes read block ...passed 00:07:50.817 Test: blockdev write zeroes read no split ...passed 00:07:50.817 Test: blockdev write zeroes read split ...passed 00:07:50.817 Test: blockdev write zeroes read split partial ...passed 00:07:50.817 Test: blockdev reset ...[2024-12-05 19:26:17.970905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:50.817 [2024-12-05 19:26:17.976653] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:50.817 passed 00:07:50.817 Test: blockdev write read 8 blocks ...passed 00:07:50.817 Test: blockdev write read size > 128k ...passed 00:07:50.817 Test: blockdev write read invalid size ...passed 00:07:50.817 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:50.817 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:50.817 Test: blockdev write read max offset ...passed 00:07:50.817 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:50.817 Test: blockdev writev readv 8 blocks ...passed 00:07:50.817 Test: blockdev writev readv 30 x 1block ...passed 00:07:50.817 Test: blockdev writev readv block ...passed 00:07:50.817 Test: blockdev writev readv size > 128k ...passed 00:07:50.817 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:50.817 Test: blockdev comparev and writev ...[2024-12-05 19:26:17.994361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1206000 len:0x1000 00:07:50.817 [2024-12-05 19:26:17.994428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:50.817 passed 00:07:50.817 Test: blockdev nvme passthru rw ...passed 00:07:50.817 Test: blockdev nvme passthru vendor specific ...passed 00:07:50.817 Test: blockdev nvme admin passthru ...[2024-12-05 19:26:17.996813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:50.817 [2024-12-05 19:26:17.996864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:50.817 passed 00:07:50.817 Test: blockdev copy ...passed 00:07:50.817 Suite: bdevio tests on: Nvme2n2 00:07:50.817 Test: blockdev write read block ...passed 00:07:50.817 Test: blockdev write zeroes read block ...passed 00:07:50.817 Test: blockdev write zeroes read no split ...passed 00:07:50.817 Test: blockdev write zeroes read split ...passed 00:07:51.075 Test: blockdev write zeroes read split partial ...passed 00:07:51.075 Test: blockdev reset ...[2024-12-05 19:26:18.073158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:51.075 [2024-12-05 19:26:18.077406] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:51.075 passed 00:07:51.075 Test: blockdev write read 8 blocks ...passed 00:07:51.075 Test: blockdev write read size > 128k ...passed 00:07:51.075 Test: blockdev write read invalid size ...passed 00:07:51.075 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.075 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.075 Test: blockdev write read max offset ...passed 00:07:51.075 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.075 Test: blockdev writev readv 8 blocks ...passed 00:07:51.075 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.075 Test: blockdev writev readv block ...passed 00:07:51.075 Test: blockdev writev readv size > 128k ...passed 00:07:51.075 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.075 Test: blockdev comparev and writev ...[2024-12-05 19:26:18.094233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2be03c000 len:0x1000 00:07:51.075 [2024-12-05 19:26:18.094299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:51.075 passed 00:07:51.075 Test: blockdev nvme passthru rw ...passed 00:07:51.075 Test: blockdev nvme passthru vendor specific ...[2024-12-05 19:26:18.097331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:51.075 [2024-12-05 19:26:18.097378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:51.075 passed 00:07:51.075 Test: blockdev nvme admin passthru ...passed 00:07:51.075 Test: blockdev copy ...passed 00:07:51.075 Suite: bdevio tests on: Nvme2n1 00:07:51.075 Test: blockdev write read block ...passed 00:07:51.075 Test: blockdev write zeroes read block ...passed 00:07:51.075 Test: blockdev write zeroes read no split ...passed 00:07:51.075 Test: blockdev write zeroes read split ...passed 00:07:51.075 Test: blockdev write zeroes read split partial ...passed 00:07:51.075 Test: blockdev reset ...[2024-12-05 19:26:18.168231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:51.075 [2024-12-05 19:26:18.174256] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:07:51.075 passed 00:07:51.075 Test: blockdev write read 8 blocks ...passed
00:07:51.075 00:07:51.075 Test: blockdev write read size > 128k ...passed 00:07:51.075 Test: blockdev write read invalid size ...passed 00:07:51.075 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.075 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.075 Test: blockdev write read max offset ...passed 00:07:51.075 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.075 Test: blockdev writev readv 8 blocks ...passed 00:07:51.075 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.075 Test: blockdev writev readv block ...passed 00:07:51.075 Test: blockdev writev readv size > 128k ...passed 00:07:51.075 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.075 Test: blockdev comparev and writev ...[2024-12-05 19:26:18.192800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2be038000 len:0x1000 00:07:51.075 [2024-12-05 19:26:18.193024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:51.075 passed 00:07:51.075 Test: blockdev nvme passthru rw ...passed 00:07:51.075 Test: blockdev nvme passthru vendor specific ...[2024-12-05 19:26:18.196227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:51.075 [2024-12-05 19:26:18.196394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:51.075 passed 00:07:51.075 Test: blockdev nvme admin passthru ...passed 00:07:51.075 Test: blockdev copy ...passed 00:07:51.075 Suite: bdevio tests on: Nvme1n1 00:07:51.075 Test: blockdev write read block ...passed 00:07:51.075 Test: blockdev write zeroes read block ...passed 00:07:51.075 Test: blockdev write zeroes read no split ...passed 00:07:51.075 Test: blockdev write zeroes read split ...passed 00:07:51.075 Test: blockdev write zeroes read split partial ...passed 00:07:51.075 Test: blockdev reset ...[2024-12-05 19:26:18.262485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:51.075 [2024-12-05 19:26:18.267479] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:51.075 passed 00:07:51.075 Test: blockdev write read 8 blocks ...passed 00:07:51.075 Test: blockdev write read size > 128k ...passed 00:07:51.075 Test: blockdev write read invalid size ...passed 00:07:51.075 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.075 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.075 Test: blockdev write read max offset ...passed 00:07:51.075 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.075 Test: blockdev writev readv 8 blocks ...passed 00:07:51.075 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.075 Test: blockdev writev readv block ...passed 00:07:51.075 Test: blockdev writev readv size > 128k ...passed 00:07:51.075 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.075 Test: blockdev comparev and writev ...[2024-12-05 19:26:18.285355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2be034000 len:0x1000 00:07:51.075 [2024-12-05 19:26:18.285423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:51.075 passed 00:07:51.075 Test: blockdev nvme passthru rw ...passed 00:07:51.075 Test: blockdev nvme passthru vendor specific ...[2024-12-05 19:26:18.287955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:51.075 [2024-12-05 19:26:18.288004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:51.075 passed 00:07:51.075 Test: blockdev nvme admin passthru ...passed 00:07:51.075 Test: blockdev copy ...passed 00:07:51.075 Suite: bdevio tests on: Nvme0n1 00:07:51.075 Test: blockdev write read block ...passed 00:07:51.075 Test: blockdev write zeroes read block ...passed 00:07:51.075 Test: blockdev write zeroes read no split ...passed 00:07:51.336 Test: blockdev write zeroes read split ...passed 00:07:51.336 Test: blockdev write zeroes read split partial ...passed 00:07:51.336 Test: blockdev reset ...[2024-12-05 19:26:18.357017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:51.336 [2024-12-05 19:26:18.360494] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:51.336 passed 00:07:51.336 Test: blockdev write read 8 blocks ...
00:07:51.336 passed 00:07:51.336 Test: blockdev write read size > 128k ...passed 00:07:51.336 Test: blockdev write read invalid size ...passed 00:07:51.336 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.336 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.336 Test: blockdev write read max offset ...passed 00:07:51.336 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.336 Test: blockdev writev readv 8 blocks ...passed 00:07:51.336 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.336 Test: blockdev writev readv block ...passed 00:07:51.336 Test: blockdev writev readv size > 128k ...passed 00:07:51.336 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.336 Test: blockdev comparev and writev ...passed 00:07:51.337 Test: blockdev nvme passthru rw ...[2024-12-05 19:26:18.379072] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:51.337 separate metadata which is not supported yet. 00:07:51.337 passed 00:07:51.337 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.337 Test: blockdev nvme admin passthru ...[2024-12-05 19:26:18.381293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:51.337 [2024-12-05 19:26:18.381351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:51.337 passed 00:07:51.337 Test: blockdev copy ...passed 00:07:51.337 00:07:51.337 Run Summary: Type Total Ran Passed Failed Inactive 00:07:51.337 suites 6 6 n/a 0 0 00:07:51.337 tests 138 138 138 0 0 00:07:51.337 asserts 893 893 893 0 n/a 00:07:51.337 00:07:51.337 Elapsed time = 1.529 seconds 00:07:51.337 0 00:07:51.337 19:26:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60269 00:07:51.337 19:26:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 60269 ']' 00:07:51.337 19:26:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 60269 00:07:51.337 19:26:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:51.337 19:26:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.337 19:26:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60269 00:07:51.337 19:26:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.337 19:26:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.337 19:26:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60269' 00:07:51.337 killing process with pid 60269 00:07:51.337 19:26:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 60269 00:07:51.337 19:26:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 60269 00:07:51.908 19:26:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:51.908 00:07:51.908 real 0m2.497s 00:07:51.908 user 0m6.214s 00:07:51.908 sys 0m0.410s 00:07:51.908 19:26:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.908 19:26:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:51.908 ************************************ 00:07:51.908 END TEST bdev_bounds 00:07:51.908 
************************************ 00:07:52.169 19:26:19 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:52.169 19:26:19 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:52.169 19:26:19 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.169 19:26:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:52.169 ************************************ 00:07:52.169 START TEST bdev_nbd 00:07:52.169 ************************************ 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60329 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60329 /var/tmp/spdk-nbd.sock 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60329 ']' 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:52.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
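The nbd_function_test starting here exports each NVMe bdev through the kernel NBD driver and round-trips a single 4 KiB block per device. A condensed sketch of the per-bdev cycle traced below, assuming a bdev_svc (or spdk_tgt) is already listening on /var/tmp/spdk-nbd.sock; /tmp/nbdtest stands in for the harness's scratch file:

    # Sketch of one NBD export/verify/teardown cycle, using the RPCs traced below.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    nbd=$($rpc nbd_start_disk Nvme0n1)                         # export the bdev; prints e.g. /dev/nbd0
    grep -q -w "${nbd#/dev/}" /proc/partitions                 # readiness check, as in waitfornbd
    dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct  # one direct 4 KiB read, verified below
    $rpc nbd_get_disks                                         # JSON list mapping nbd devices to bdev names
    $rpc nbd_stop_disk "$nbd"                                  # detach the kernel block device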
00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:52.169 19:26:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:52.169 [2024-12-05 19:26:19.298647] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:07:52.169 [2024-12-05 19:26:19.299000] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.428 [2024-12-05 19:26:19.461969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.428 [2024-12-05 19:26:19.584771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.003 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.003 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:53.003 19:26:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:53.003 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.003 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:53.003 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:53.003 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:53.003 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.003 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:53.003 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:53.003 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:53.003 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:53.003 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:53.003 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:53.003 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:53.282 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:53.282 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:53.282 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:53.282 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:53.282 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # 
local i 00:07:53.282 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:53.282 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:53.282 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:53.282 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:53.282 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:53.282 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:53.282 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.282 1+0 records in 00:07:53.282 1+0 records out 00:07:53.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00149918 s, 2.7 MB/s 00:07:53.282 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.282 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:53.282 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.282 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:53.282 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:53.282 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.282 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:53.282 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:53.543 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:53.543 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:53.543 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:53.543 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:53.543 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:53.543 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:53.543 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:53.543 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:53.543 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:53.543 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:53.543 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:53.543 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.543 1+0 records in 00:07:53.543 1+0 records out 00:07:53.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120102 s, 3.4 MB/s 00:07:53.543 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.543 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:53.543 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:07:53.543 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:53.543 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:53.543 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.543 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:53.543 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:53.804 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:53.804 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:53.804 19:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:53.804 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:53.804 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:53.804 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:53.804 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:53.804 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:53.804 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:53.804 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:53.804 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:53.804 19:26:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.804 1+0 records in 00:07:53.804 1+0 records out 00:07:53.804 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00168304 s, 2.4 MB/s 00:07:53.804 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.804 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:53.804 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.804 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:53.804 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:53.804 19:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.804 19:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:53.804 19:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:54.064 19:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:54.064 19:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:54.064 19:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:54.064 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:54.064 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:54.064 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:54.064 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:54.064 19:26:21 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:54.064 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:54.064 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:54.064 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:54.064 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.064 1+0 records in 00:07:54.064 1+0 records out 00:07:54.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00138992 s, 2.9 MB/s 00:07:54.064 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.064 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:54.064 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.064 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:54.064 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:54.064 19:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:54.064 19:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:54.064 19:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:54.326 19:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:54.326 19:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:54.326 19:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:54.326 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:54.326 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:54.326 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:54.326 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:54.326 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:54.326 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:54.326 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:54.326 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:54.326 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.326 1+0 records in 00:07:54.326 1+0 records out 00:07:54.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100238 s, 4.1 MB/s 00:07:54.326 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.326 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:54.326 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.326 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:54.326 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:54.326 19:26:21 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:54.326 19:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:54.326 19:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:54.589 19:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:54.590 19:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:54.590 19:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:54.590 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:54.590 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:54.590 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:54.590 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:54.590 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:54.590 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:54.590 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:54.590 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:54.590 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.590 1+0 records in 00:07:54.590 1+0 records out 00:07:54.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000958183 s, 4.3 MB/s 00:07:54.590 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.590 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:54.590 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.590 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:54.590 19:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:54.590 19:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:54.590 19:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:54.590 19:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:54.851 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:54.851 { 00:07:54.851 "nbd_device": "/dev/nbd0", 00:07:54.851 "bdev_name": "Nvme0n1" 00:07:54.851 }, 00:07:54.851 { 00:07:54.851 "nbd_device": "/dev/nbd1", 00:07:54.851 "bdev_name": "Nvme1n1" 00:07:54.851 }, 00:07:54.851 { 00:07:54.851 "nbd_device": "/dev/nbd2", 00:07:54.851 "bdev_name": "Nvme2n1" 00:07:54.851 }, 00:07:54.851 { 00:07:54.851 "nbd_device": "/dev/nbd3", 00:07:54.851 "bdev_name": "Nvme2n2" 00:07:54.851 }, 00:07:54.851 { 00:07:54.851 "nbd_device": "/dev/nbd4", 00:07:54.851 "bdev_name": "Nvme2n3" 00:07:54.851 }, 00:07:54.851 { 00:07:54.851 "nbd_device": "/dev/nbd5", 00:07:54.851 "bdev_name": "Nvme3n1" 00:07:54.851 } 00:07:54.851 ]' 00:07:54.851 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:54.851 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | 
.nbd_device' 00:07:54.851 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:54.851 { 00:07:54.851 "nbd_device": "/dev/nbd0", 00:07:54.852 "bdev_name": "Nvme0n1" 00:07:54.852 }, 00:07:54.852 { 00:07:54.852 "nbd_device": "/dev/nbd1", 00:07:54.852 "bdev_name": "Nvme1n1" 00:07:54.852 }, 00:07:54.852 { 00:07:54.852 "nbd_device": "/dev/nbd2", 00:07:54.852 "bdev_name": "Nvme2n1" 00:07:54.852 }, 00:07:54.852 { 00:07:54.852 "nbd_device": "/dev/nbd3", 00:07:54.852 "bdev_name": "Nvme2n2" 00:07:54.852 }, 00:07:54.852 { 00:07:54.852 "nbd_device": "/dev/nbd4", 00:07:54.852 "bdev_name": "Nvme2n3" 00:07:54.852 }, 00:07:54.852 { 00:07:54.852 "nbd_device": "/dev/nbd5", 00:07:54.852 "bdev_name": "Nvme3n1" 00:07:54.852 } 00:07:54.852 ]' 00:07:54.852 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:54.852 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.852 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:54.852 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:54.852 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:54.852 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.852 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:55.112 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:55.112 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:55.112 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:55.112 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.112 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.112 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:55.112 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.112 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.112 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.112 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:55.685 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:55.685 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:55.685 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:55.685 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.685 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.685 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:55.685 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.685 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.685 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.685 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:55.685 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:55.685 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:55.685 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:55.685 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.685 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.685 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:55.685 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.685 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.685 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.685 19:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:55.944 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:55.944 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:55.944 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:55.944 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.944 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.944 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:55.944 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.944 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.944 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.944 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:56.225 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:56.225 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:56.225 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:56.225 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.225 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.225 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:56.225 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:56.225 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.225 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.225 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:56.484 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:56.484 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:56.484 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:56.484 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.484 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:07:56.484 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:56.484 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:56.484 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.484 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:56.484 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.484 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:56.744 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:56.744 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:56.744 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:56.744 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:56.744 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:56.744 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:56.744 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:56.744 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:56.744 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:56.744 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:56.744 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:56.744 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:56.744 19:26:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:56.744 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.744 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:56.744 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:56.744 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:56.744 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:56.745 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:56.745 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.745 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:56.745 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:56.745 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:56.745 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:56.745 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:56.745 19:26:23 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:56.745 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:56.745 19:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:57.004 /dev/nbd0 00:07:57.004 19:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:57.004 19:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:57.004 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:57.004 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:57.004 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:57.004 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:57.004 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:57.004 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:57.004 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:57.004 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:57.004 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.004 1+0 records in 00:07:57.004 1+0 records out 00:07:57.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00148899 s, 2.8 MB/s 00:07:57.004 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.004 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:57.004 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.004 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:57.004 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:57.004 19:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.004 19:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:57.004 19:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:57.326 /dev/nbd1 00:07:57.326 19:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:57.326 19:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:57.326 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:57.326 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:57.326 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:57.326 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:57.326 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:57.326 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:57.326 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:57.326 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:57.326 19:26:24 blockdev_nvme.bdev_nbd 
-- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.326 1+0 records in 00:07:57.326 1+0 records out 00:07:57.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00089044 s, 4.6 MB/s 00:07:57.326 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.326 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:57.326 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.326 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:57.326 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:57.326 19:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.326 19:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:57.326 19:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:57.588 /dev/nbd10 00:07:57.588 19:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:57.588 19:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:57.588 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:57.588 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:57.588 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:57.588 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:57.588 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:57.588 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:57.588 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:57.588 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:57.588 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.588 1+0 records in 00:07:57.588 1+0 records out 00:07:57.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000846691 s, 4.8 MB/s 00:07:57.588 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.588 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:57.588 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.588 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:57.588 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:57.588 19:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.588 19:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:57.588 19:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:57.588 /dev/nbd11 00:07:57.859 19:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:57.859 19:26:24 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:57.859 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:57.859 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:57.859 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:57.859 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:57.859 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:57.859 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:57.859 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:57.859 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:57.859 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.859 1+0 records in 00:07:57.859 1+0 records out 00:07:57.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133472 s, 3.1 MB/s 00:07:57.859 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.859 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:57.859 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.859 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:57.859 19:26:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:57.859 19:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.859 19:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:57.859 19:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:57.859 /dev/nbd12 00:07:57.859 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.120 1+0 records in 00:07:58.120 1+0 records out 00:07:58.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00148579 s, 2.8 MB/s 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.120 19:26:25 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:58.120 /dev/nbd13 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:58.120 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.379 1+0 records in 00:07:58.379 1+0 records out 00:07:58.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120751 s, 3.4 MB/s 00:07:58.379 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.379 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:58.379 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.379 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:58.380 19:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:58.380 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.380 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:58.380 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:58.380 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.380 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:58.380 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:58.380 { 00:07:58.380 "nbd_device": "/dev/nbd0", 00:07:58.380 "bdev_name": "Nvme0n1" 00:07:58.380 }, 00:07:58.380 { 00:07:58.380 "nbd_device": "/dev/nbd1", 00:07:58.380 "bdev_name": "Nvme1n1" 00:07:58.380 }, 00:07:58.380 { 00:07:58.380 "nbd_device": "/dev/nbd10", 
00:07:58.380 "bdev_name": "Nvme2n1" 00:07:58.380 }, 00:07:58.380 { 00:07:58.380 "nbd_device": "/dev/nbd11", 00:07:58.380 "bdev_name": "Nvme2n2" 00:07:58.380 }, 00:07:58.380 { 00:07:58.380 "nbd_device": "/dev/nbd12", 00:07:58.380 "bdev_name": "Nvme2n3" 00:07:58.380 }, 00:07:58.380 { 00:07:58.380 "nbd_device": "/dev/nbd13", 00:07:58.380 "bdev_name": "Nvme3n1" 00:07:58.380 } 00:07:58.380 ]' 00:07:58.380 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:58.380 { 00:07:58.380 "nbd_device": "/dev/nbd0", 00:07:58.380 "bdev_name": "Nvme0n1" 00:07:58.380 }, 00:07:58.380 { 00:07:58.380 "nbd_device": "/dev/nbd1", 00:07:58.380 "bdev_name": "Nvme1n1" 00:07:58.380 }, 00:07:58.380 { 00:07:58.380 "nbd_device": "/dev/nbd10", 00:07:58.380 "bdev_name": "Nvme2n1" 00:07:58.380 }, 00:07:58.380 { 00:07:58.380 "nbd_device": "/dev/nbd11", 00:07:58.380 "bdev_name": "Nvme2n2" 00:07:58.380 }, 00:07:58.380 { 00:07:58.380 "nbd_device": "/dev/nbd12", 00:07:58.380 "bdev_name": "Nvme2n3" 00:07:58.380 }, 00:07:58.380 { 00:07:58.380 "nbd_device": "/dev/nbd13", 00:07:58.380 "bdev_name": "Nvme3n1" 00:07:58.380 } 00:07:58.380 ]' 00:07:58.380 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:58.652 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:58.652 /dev/nbd1 00:07:58.652 /dev/nbd10 00:07:58.652 /dev/nbd11 00:07:58.652 /dev/nbd12 00:07:58.652 /dev/nbd13' 00:07:58.652 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:58.652 /dev/nbd1 00:07:58.652 /dev/nbd10 00:07:58.652 /dev/nbd11 00:07:58.652 /dev/nbd12 00:07:58.652 /dev/nbd13' 00:07:58.652 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:58.652 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:58.652 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:58.652 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:58.652 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:58.652 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:58.652 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:58.652 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:58.652 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:58.652 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:58.653 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:58.653 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:58.653 256+0 records in 00:07:58.653 256+0 records out 00:07:58.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00916653 s, 114 MB/s 00:07:58.653 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.653 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:58.918 256+0 records in 00:07:58.918 256+0 records out 00:07:58.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.261407 s, 4.0 MB/s 00:07:58.918 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.918 19:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:59.179 256+0 records in 00:07:59.179 256+0 records out 00:07:59.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.272824 s, 3.8 MB/s 00:07:59.179 19:26:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.179 19:26:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:59.440 256+0 records in 00:07:59.440 256+0 records out 00:07:59.440 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.276931 s, 3.8 MB/s 00:07:59.440 19:26:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.440 19:26:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:59.702 256+0 records in 00:07:59.702 256+0 records out 00:07:59.702 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.293443 s, 3.6 MB/s 00:07:59.702 19:26:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.702 19:26:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:59.963 256+0 records in 00:07:59.963 256+0 records out 00:07:59.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.295027 s, 3.6 MB/s 00:07:59.964 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.964 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:00.225 256+0 records in 00:08:00.225 256+0 records out 00:08:00.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.237866 s, 4.4 MB/s 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:00.225 
19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.225 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:00.487 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:00.487 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:00.488 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:00.488 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.488 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.488 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:00.488 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.488 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.488 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.488 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:00.794 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:00.794 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:00.794 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:00.794 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.794 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.794 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:00.794 19:26:27 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:08:00.794 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.794 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.794 19:26:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:01.056 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:01.056 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:01.056 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:01.056 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.056 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.056 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:01.056 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.056 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.056 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.056 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:01.056 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:01.056 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:01.056 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:01.056 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.056 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.056 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:01.056 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.056 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.056 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.056 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:01.319 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:01.319 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:01.319 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:01.319 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.319 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.319 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:01.319 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.319 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.319 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.319 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:01.582 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:01.582 19:26:28 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:01.582 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:01.582 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.582 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.582 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:01.582 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.582 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.582 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:01.582 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.582 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:01.844 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:01.844 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:01.844 19:26:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:01.844 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:01.844 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:01.844 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:01.844 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:01.844 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:01.844 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:01.844 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:01.844 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:01.844 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:01.844 19:26:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:01.844 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.844 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:01.844 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:02.104 malloc_lvol_verify 00:08:02.104 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:02.364 89ee5bc5-eae1-4c93-b4dc-d896f3049691 00:08:02.364 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:02.626 638c31dd-0f20-4b19-ae35-762fbf30a37d 00:08:02.626 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:02.626 /dev/nbd0 00:08:02.888 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:02.888 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:02.888 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- 
# [[ -e /sys/block/nbd0/size ]] 00:08:02.888 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:02.888 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:02.888 mke2fs 1.47.0 (5-Feb-2023) 00:08:02.888 Discarding device blocks: 0/4096 done 00:08:02.888 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:02.888 00:08:02.888 Allocating group tables: 0/1 done 00:08:02.888 Writing inode tables: 0/1 done 00:08:02.888 Creating journal (1024 blocks): done 00:08:02.888 Writing superblocks and filesystem accounting information: 0/1 done 00:08:02.888 00:08:02.888 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:02.888 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.888 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:02.888 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:02.888 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:02.888 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.888 19:26:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:02.888 19:26:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:03.150 19:26:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:03.150 19:26:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:03.150 19:26:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.150 19:26:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.150 19:26:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:03.150 19:26:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.150 19:26:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.150 19:26:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60329 00:08:03.150 19:26:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60329 ']' 00:08:03.150 19:26:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60329 00:08:03.150 19:26:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:08:03.150 19:26:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.150 19:26:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60329 00:08:03.150 19:26:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:03.150 19:26:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:03.150 killing process with pid 60329 00:08:03.150 19:26:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60329' 00:08:03.150 19:26:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60329 00:08:03.150 19:26:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60329 00:08:04.090 19:26:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:04.090 00:08:04.090 real 0m11.821s 00:08:04.090 user 0m15.798s 00:08:04.090 sys 0m4.034s 00:08:04.090 ************************************ 
00:08:04.090 END TEST bdev_nbd 00:08:04.090 ************************************ 00:08:04.090 19:26:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.090 19:26:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:04.090 19:26:31 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:08:04.090 19:26:31 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:08:04.090 skipping fio tests on NVMe due to multi-ns failures. 00:08:04.090 19:26:31 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:04.090 19:26:31 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:04.090 19:26:31 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:04.090 19:26:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:04.090 19:26:31 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.090 19:26:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:04.090 ************************************ 00:08:04.090 START TEST bdev_verify 00:08:04.090 ************************************ 00:08:04.090 19:26:31 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:04.090 [2024-12-05 19:26:31.184612] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:08:04.090 [2024-12-05 19:26:31.184780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60728 ] 00:08:04.352 [2024-12-05 19:26:31.348600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:04.352 [2024-12-05 19:26:31.486808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.352 [2024-12-05 19:26:31.486808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.924 Running I/O for 5 seconds... 
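For reference, the bdev_verify pass launched above reduces to a single bdevperf invocation. A minimal sketch for reproducing it by hand, assuming the vagrant workspace layout used on this CI host (the option readings below are inferred from the output and are best confirmed with bdevperf --help):

    # 5-second verify workload: 4 KiB I/Os at queue depth 128 against the bdevs
    # declared in bdev.json, pinned to cores 0 and 1 (-m 0x3); -C has every core
    # in the mask drive every bdev, which is why each Nvme bdev reports one job
    # per core (Core Mask 0x1 and 0x2) in the results that follow
    cd /home/vagrant/spdk_repo/spdk
    sudo ./build/examples/bdevperf \
        --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3   # root typically needed for hugepages

The verify workload writes a pattern and reads it back for comparison, so the IOPS below reflect a full write-read-verify cycle rather than raw read throughput.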
00:08:07.253 16960.00 IOPS, 66.25 MiB/s [2024-12-05T19:26:35.895Z] 17344.00 IOPS, 67.75 MiB/s [2024-12-05T19:26:36.837Z] 17386.67 IOPS, 67.92 MiB/s [2024-12-05T19:26:37.409Z] 17024.00 IOPS, 66.50 MiB/s [2024-12-05T19:26:37.409Z] 16768.00 IOPS, 65.50 MiB/s
00:08:10.154 Latency(us)
00:08:10.154 [2024-12-05T19:26:37.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:10.154 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:10.154 Verification LBA range: start 0x0 length 0xbd0bd
00:08:10.154 Nvme0n1 : 5.09 1307.91 5.11 0.00 0.00 97668.39 17341.83 163739.18
00:08:10.154 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:10.154 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:08:10.154 Nvme0n1 : 5.08 1436.40 5.61 0.00 0.00 88818.13 20164.92 108890.58
00:08:10.154 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:10.154 Verification LBA range: start 0x0 length 0xa0000
00:08:10.154 Nvme1n1 : 5.09 1307.54 5.11 0.00 0.00 97541.60 17845.96 153253.42
00:08:10.154 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:10.154 Verification LBA range: start 0xa0000 length 0xa0000
00:08:10.154 Nvme1n1 : 5.08 1435.95 5.61 0.00 0.00 88699.77 22181.42 103244.41
00:08:10.154 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:10.154 Verification LBA range: start 0x0 length 0x80000
00:08:10.154 Nvme2n1 : 5.09 1307.12 5.11 0.00 0.00 97121.19 16535.24 145994.04
00:08:10.154 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:10.154 Verification LBA range: start 0x80000 length 0x80000
00:08:10.154 Nvme2n1 : 5.08 1435.50 5.61 0.00 0.00 88331.76 23290.49 93565.24
00:08:10.154 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:10.154 Verification LBA range: start 0x0 length 0x80000
00:08:10.154 Nvme2n2 : 5.09 1306.75 5.10 0.00 0.00 96991.49 15426.17 137121.48
00:08:10.154 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:10.154 Verification LBA range: start 0x80000 length 0x80000
00:08:10.154 Nvme2n2 : 5.08 1435.02 5.61 0.00 0.00 88032.95 23391.31 90338.86
00:08:10.154 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:10.154 Verification LBA range: start 0x0 length 0x80000
00:08:10.154 Nvme2n3 : 5.10 1306.37 5.10 0.00 0.00 96787.56 12703.90 146800.64
00:08:10.154 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:10.154 Verification LBA range: start 0x80000 length 0x80000
00:08:10.154 Nvme2n3 : 5.10 1443.85 5.64 0.00 0.00 87402.93 7561.85 88725.66
00:08:10.154 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:10.154 Verification LBA range: start 0x0 length 0x20000
00:08:10.154 Nvme3n1 : 5.10 1305.86 5.10 0.00 0.00 96513.96 13107.20 162932.58
00:08:10.154 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:10.154 Verification LBA range: start 0x20000 length 0x20000
00:08:10.154 Nvme3n1 : 5.10 1442.87 5.64 0.00 0.00 87273.68 10687.41 93968.54
[2024-12-05T19:26:37.409Z] ===================================================================================================================
[2024-12-05T19:26:37.409Z] Total : 16471.14 64.34 0.00 0.00 92384.23 7561.85 163739.18
00:08:11.543
00:08:11.543 real 0m7.368s
00:08:11.543 user 0m13.578s
00:08:11.543 sys 0m0.311s
19:26:38 blockdev_nvme.bdev_verify --
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.543 ************************************ 00:08:11.543 END TEST bdev_verify 00:08:11.543 ************************************ 00:08:11.543 19:26:38 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:11.543 19:26:38 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:11.543 19:26:38 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:11.543 19:26:38 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.543 19:26:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:11.543 ************************************ 00:08:11.543 START TEST bdev_verify_big_io 00:08:11.543 ************************************ 00:08:11.543 19:26:38 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:11.543 [2024-12-05 19:26:38.634201] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:08:11.543 [2024-12-05 19:26:38.634349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60821 ] 00:08:11.806 [2024-12-05 19:26:38.801697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:11.806 [2024-12-05 19:26:38.941411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.806 [2024-12-05 19:26:38.941436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.749 Running I/O for 5 seconds... 
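The big-I/O pass is the same verify workload with -o 65536, so every I/O is 64 KiB and the MiB/s column is just IOPS/16 (65536/1048576 = 1/16); for the 4 KiB run above the divisor is 256. A quick sanity check of the first progress sample below, as a sketch:

    # 64 KiB I/O: MiB/s = IOPS * 65536 / 2^20 = IOPS / 16
    echo 'scale=2; 2223.00 / 16' | bc    # 138.93, matching the reported 138.94 MiB/s
    # 4 KiB I/O from the earlier verify run: 16960 / 256 = 66.25 MiB/s, as reported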
00:08:17.249 2223.00 IOPS, 138.94 MiB/s [2024-12-05T19:26:45.447Z] 3049.00 IOPS, 190.56 MiB/s [2024-12-05T19:26:45.447Z] 2583.33 IOPS, 161.46 MiB/s
00:08:18.192 Latency(us)
00:08:18.192 [2024-12-05T19:26:45.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:18.192 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:18.192 Verification LBA range: start 0x0 length 0xbd0b
00:08:18.192 Nvme0n1 : 5.53 157.25 9.83 0.00 0.00 795690.19 13208.02 800144.15
00:08:18.192 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:18.192 Verification LBA range: start 0xbd0b length 0xbd0b
00:08:18.192 Nvme0n1 : 5.52 156.11 9.76 0.00 0.00 806504.18 12401.43 877577.45
00:08:18.192 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:18.192 Verification LBA range: start 0x0 length 0xa000
00:08:18.192 Nvme1n1 : 5.53 148.33 9.27 0.00 0.00 826959.87 13308.85 1284102.30
00:08:18.192 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:18.192 Verification LBA range: start 0xa000 length 0xa000
00:08:18.192 Nvme1n1 : 5.53 157.69 9.86 0.00 0.00 782112.84 15930.29 764653.88
00:08:18.192 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:18.192 Verification LBA range: start 0x0 length 0x8000
00:08:18.192 Nvme2n1 : 5.54 148.40 9.27 0.00 0.00 815607.23 13510.50 1309913.40
00:08:18.192 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:18.192 Verification LBA range: start 0x8000 length 0x8000
00:08:18.192 Nvme2n1 : 5.53 158.00 9.88 0.00 0.00 770381.51 17845.96 787238.60
00:08:18.192 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:18.192 Verification LBA range: start 0x0 length 0x8000
00:08:18.192 Nvme2n2 : 5.54 147.42 9.21 0.00 0.00 809358.00 13308.85 1348630.06
00:08:18.192 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:18.192 Verification LBA range: start 0x8000 length 0x8000
00:08:18.192 Nvme2n2 : 5.53 154.89 9.68 0.00 0.00 774759.64 17140.18 1000180.18
00:08:18.192 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:18.192 Verification LBA range: start 0x0 length 0x8000
00:08:18.192 Nvme2n3 : 5.54 148.06 9.25 0.00 0.00 795207.92 13712.15 1380893.93
00:08:18.192 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:18.192 Verification LBA range: start 0x8000 length 0x8000
00:08:18.192 Nvme2n3 : 5.54 154.07 9.63 0.00 0.00 767538.20 15325.34 1226027.32
00:08:18.192 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:18.192 Verification LBA range: start 0x0 length 0x2000
00:08:18.192 Nvme3n1 : 5.55 151.25 9.45 0.00 0.00 769104.35 14417.92 1393799.48
00:08:18.192 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:18.192 Verification LBA range: start 0x2000 length 0x2000
00:08:18.192 Nvme3n1 : 5.54 157.23 9.83 0.00 0.00 742303.87 16636.06 858219.13
[2024-12-05T19:26:45.447Z] ===================================================================================================================
[2024-12-05T19:26:45.447Z] Total : 1838.70 114.92 0.00 0.00 787575.66 12401.43 1393799.48
00:08:20.106
00:08:20.106 real 0m8.539s
00:08:20.106 user 0m15.941s
00:08:20.106 sys 0m0.301s
00:08:20.106 ************************************
00:08:20.106 END TEST bdev_verify_big_io ************************************
00:08:20.106 19:26:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:20.106 19:26:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:08:20.106 19:26:47 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:20.106 19:26:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:08:20.106 19:26:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:20.106 19:26:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:20.106 ************************************
00:08:20.106 START TEST bdev_write_zeroes
00:08:20.106 ************************************
00:08:20.106 19:26:47 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:20.366 [2024-12-05 19:26:47.262177] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization...
00:08:20.366 [2024-12-05 19:26:47.262329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60930 ]
00:08:20.366 [2024-12-05 19:26:47.436448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:20.366 [2024-12-05 19:26:47.581560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:21.307 Running I/O for 1 seconds...
00:08:22.342 42982.00 IOPS, 167.90 MiB/s
00:08:22.342 Latency(us)
00:08:22.342 [2024-12-05T19:26:49.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:22.342 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:22.342 Nvme0n1 : 1.03 7153.74 27.94 0.00 0.00 17849.83 5217.67 34280.37
00:08:22.342 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:22.342 Nvme1n1 : 1.03 7170.55 28.01 0.00 0.00 17786.48 10233.70 27222.65
00:08:22.342 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:22.342 Nvme2n1 : 1.03 7162.33 27.98 0.00 0.00 17731.12 10384.94 25811.10
00:08:22.342 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:22.342 Nvme2n2 : 1.03 7154.14 27.95 0.00 0.00 17695.62 10435.35 26012.75
00:08:22.342 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:22.342 Nvme2n3 : 1.03 7145.94 27.91 0.00 0.00 17669.92 10334.52 26416.05
00:08:22.342 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:22.342 Nvme3n1 : 1.03 7137.76 27.88 0.00 0.00 17636.38 8922.98 27827.59
[2024-12-05T19:26:49.597Z] ===================================================================================================================
[2024-12-05T19:26:49.597Z] Total : 42924.45 167.67 0.00 0.00 17728.15 5217.67 34280.37
00:08:22.913
00:08:22.913 real 0m2.932s
00:08:22.913 user 0m2.501s
00:08:22.913 sys 0m0.303s
00:08:22.913 19:26:50 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:22.913 19:26:50 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:08:22.913
************************************ 00:08:22.913 END TEST bdev_write_zeroes 00:08:22.913 ************************************ 00:08:22.913 19:26:50 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:22.913 19:26:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:22.913 19:26:50 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.913 19:26:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.173 ************************************ 00:08:23.173 START TEST bdev_json_nonenclosed 00:08:23.173 ************************************ 00:08:23.173 19:26:50 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:23.173 [2024-12-05 19:26:50.244551] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:08:23.173 [2024-12-05 19:26:50.244712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60983 ] 00:08:23.173 [2024-12-05 19:26:50.405571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.433 [2024-12-05 19:26:50.541825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.433 [2024-12-05 19:26:50.541982] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:23.434 [2024-12-05 19:26:50.542020] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:23.434 [2024-12-05 19:26:50.542037] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:23.695 00:08:23.695 real 0m0.603s 00:08:23.695 user 0m0.380s 00:08:23.695 sys 0m0.118s 00:08:23.695 19:26:50 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.695 ************************************ 00:08:23.695 END TEST bdev_json_nonenclosed 00:08:23.695 ************************************ 00:08:23.695 19:26:50 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:23.695 19:26:50 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:23.695 19:26:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:23.695 19:26:50 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.695 19:26:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.695 ************************************ 00:08:23.695 START TEST bdev_json_nonarray 00:08:23.695 ************************************ 00:08:23.695 19:26:50 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:23.695 [2024-12-05 19:26:50.922292] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
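For context: bdev_json_nonenclosed (just finished above) and bdev_json_nonarray (starting here) reuse the bdevperf command line from the write_zeroes run but point --json at deliberately malformed files, so the json_config.c *ERROR* lines are each test's expected outcome, not a fault. A minimal sketch of the three shapes involved -- illustrative contents only, the real nonenclosed.json and nonarray.json live under test/bdev/:

    # accepted: a top-level object whose "subsystems" member is an array
    { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
    # rejected with "not enclosed in {}.": a bare member, no enclosing object
    "subsystems": [ { "subsystem": "bdev", "config": [] } ]
    # rejected with "'subsystems' should be an array.": an object in its place
    { "subsystems": { "subsystem": "bdev", "config": [] } }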
00:08:23.695 [2024-12-05 19:26:50.922454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61014 ] 00:08:23.960 [2024-12-05 19:26:51.089351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.224 [2024-12-05 19:26:51.234653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.224 [2024-12-05 19:26:51.234801] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:08:24.224 [2024-12-05 19:26:51.234821] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:24.224 [2024-12-05 19:26:51.234832] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:24.224 00:08:24.224 real 0m0.590s 00:08:24.224 user 0m0.360s 00:08:24.224 sys 0m0.123s 00:08:24.224 19:26:51 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.224 ************************************ 00:08:24.224 END TEST bdev_json_nonarray 00:08:24.224 ************************************ 00:08:24.224 19:26:51 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:24.487 19:26:51 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:08:24.488 19:26:51 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:08:24.488 19:26:51 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:08:24.488 19:26:51 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:08:24.488 19:26:51 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:08:24.488 19:26:51 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:24.488 19:26:51 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:24.488 19:26:51 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:24.488 19:26:51 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:24.488 19:26:51 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:24.488 19:26:51 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:24.488 00:08:24.488 real 0m40.196s 00:08:24.488 user 0m59.707s 00:08:24.488 sys 0m6.720s 00:08:24.488 19:26:51 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.488 ************************************ 00:08:24.488 END TEST blockdev_nvme 00:08:24.488 ************************************ 00:08:24.488 19:26:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.488 19:26:51 -- spdk/autotest.sh@209 -- # uname -s 00:08:24.488 19:26:51 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:08:24.488 19:26:51 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:24.488 19:26:51 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:24.488 19:26:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.488 19:26:51 -- common/autotest_common.sh@10 -- # set +x 00:08:24.488 ************************************ 00:08:24.488 START TEST blockdev_nvme_gpt 00:08:24.488 ************************************ 00:08:24.488 19:26:51 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:24.488 * Looking for test storage... 
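Every START TEST/END TEST banner in this log comes from the run_test helper in autotest_common.sh, invoked as run_test <name> <command...> just as above for blockdev_nvme_gpt. A condensed sketch of the pattern, simplified -- the real helper also records timing data and manages the xtrace state visible in the surrounding lines:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        "$@"                # the wrapped command, e.g. .../test/bdev/blockdev.sh gpt
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }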
00:08:24.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:24.488 19:26:51 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:24.488 19:26:51 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:24.488 19:26:51 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:08:24.488 19:26:51 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:24.488 19:26:51 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:08:24.488 19:26:51 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.488 19:26:51 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:24.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.488 --rc genhtml_branch_coverage=1 00:08:24.488 --rc genhtml_function_coverage=1 00:08:24.488 --rc genhtml_legend=1 00:08:24.488 --rc geninfo_all_blocks=1 00:08:24.488 --rc geninfo_unexecuted_blocks=1 00:08:24.488 00:08:24.488 ' 00:08:24.488 19:26:51 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:24.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.488 --rc 
genhtml_branch_coverage=1 00:08:24.488 --rc genhtml_function_coverage=1 00:08:24.488 --rc genhtml_legend=1 00:08:24.488 --rc geninfo_all_blocks=1 00:08:24.488 --rc geninfo_unexecuted_blocks=1 00:08:24.488 00:08:24.488 ' 00:08:24.488 19:26:51 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:24.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.488 --rc genhtml_branch_coverage=1 00:08:24.488 --rc genhtml_function_coverage=1 00:08:24.488 --rc genhtml_legend=1 00:08:24.488 --rc geninfo_all_blocks=1 00:08:24.488 --rc geninfo_unexecuted_blocks=1 00:08:24.488 00:08:24.488 ' 00:08:24.488 19:26:51 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:24.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.488 --rc genhtml_branch_coverage=1 00:08:24.488 --rc genhtml_function_coverage=1 00:08:24.488 --rc genhtml_legend=1 00:08:24.488 --rc geninfo_all_blocks=1 00:08:24.488 --rc geninfo_unexecuted_blocks=1 00:08:24.488 00:08:24.488 ' 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61098 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:24.488 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61098 
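The scripts/common.sh trace above is a dotted-version comparison: lt 1.15 2 splits both strings on ".-:" and walks the fields, concluding that the installed lcov predates 2.x, which is why the legacy --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 spellings were exported. A hedged one-line equivalent using sort -V instead of the script's field-by-field loop:

    # succeeds when $1 sorts strictly before $2 in version order
    version_lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    version_lt 1.15 2 && echo "lcov < 2: keep the legacy --rc option spelling"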
00:08:24.488 19:26:51 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 61098 ']' 00:08:24.488 19:26:51 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.750 19:26:51 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.750 19:26:51 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.750 19:26:51 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:24.750 19:26:51 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.750 19:26:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:24.750 [2024-12-05 19:26:51.830084] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:08:24.750 [2024-12-05 19:26:51.830254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61098 ] 00:08:24.750 [2024-12-05 19:26:51.996284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.013 [2024-12-05 19:26:52.137741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.957 19:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.957 19:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:08:25.957 19:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:08:25.957 19:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:08:25.957 19:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:25.957 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:26.217 Waiting for block devices as requested 00:08:26.217 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:26.478 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:26.478 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:26.478 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:31.768 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:31.768 19:26:58 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:08:31.768 19:26:58 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:31.768 19:26:58 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:31.768 19:26:58 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:08:31.768 19:26:58 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:08:31.768 19:26:58 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:08:31.768 19:26:58 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:31.768 19:26:58 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:08:31.768 19:26:58 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:08:31.768 19:26:58 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:08:31.768 19:26:58 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:08:31.768 BYT; 00:08:31.768 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:31.768 19:26:58 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:08:31.768 BYT; 00:08:31.768 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:31.768 19:26:58 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:08:31.768 19:26:58 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:08:31.768 19:26:58 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:08:31.768 19:26:58 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:31.768 19:26:58 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:31.768 19:26:58 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:31.768 19:26:58 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:08:31.768 19:26:58 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:08:31.768 19:26:58 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:31.768 19:26:58 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:31.768 19:26:58 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:08:31.768 19:26:58 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:08:31.768 19:26:58 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:31.768 19:26:58 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:31.768 19:26:58 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:31.768 19:26:58 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:31.768 19:26:58 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:31.768 19:26:58 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:08:31.768 19:26:58 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:08:31.768 19:26:58 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:31.768 19:26:58 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:31.768 19:26:58 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:08:31.768 19:26:58 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:08:31.768 19:26:58 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:31.768 19:26:58 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:31.768 19:26:58 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:31.768 19:26:58 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:31.768 19:26:58 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:31.768 19:26:58 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:08:32.710 The operation has completed successfully. 00:08:32.710 19:26:59 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:08:34.099 The operation has completed successfully. 00:08:34.099 19:27:00 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:34.099 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:34.673 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:34.673 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:34.673 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:34.932 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:34.932 19:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:08:34.932 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.932 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:34.932 [] 00:08:34.932 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.932 19:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:08:34.932 19:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:34.932 19:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:34.932 19:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:34.932 19:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:34.932 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.932 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:35.189 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.189 19:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:08:35.189 19:27:02 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.189 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:35.189 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.189 19:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:08:35.189 19:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:08:35.189 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.189 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:35.189 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.189 19:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:08:35.189 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.189 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:35.189 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.189 19:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:35.189 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.189 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:35.189 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.189 19:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:08:35.189 19:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:08:35.189 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.189 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:35.189 19:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:08:35.451 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.451 19:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:08:35.451 19:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:08:35.452 19:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "1d67bcd4-c897-48fa-bc18-3a7091d76eb3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "1d67bcd4-c897-48fa-bc18-3a7091d76eb3",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "3d3c884b-de00-406b-aa00-574ef4763ae4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3d3c884b-de00-406b-aa00-574ef4763ae4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "830f6b25-84af-4ef2-ba43-941ebabc1cd9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "830f6b25-84af-4ef2-ba43-941ebabc1cd9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "7ee7cc38-44d9-4be2-93c7-4fda84c0dffd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7ee7cc38-44d9-4be2-93c7-4fda84c0dffd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "4d651be3-4be8-4f42-9d61-a72cb0aca8fa"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "4d651be3-4be8-4f42-9d61-a72cb0aca8fa",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:35.452 19:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:08:35.452 19:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:08:35.452 19:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:08:35.452 19:27:02 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 61098 00:08:35.452 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 61098 ']' 00:08:35.452 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 61098 00:08:35.452 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:08:35.452 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.452 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61098 00:08:35.452 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.452 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.452 killing process with pid 61098 00:08:35.452 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61098' 00:08:35.452 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 61098 00:08:35.452 19:27:02 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 61098 00:08:37.364 19:27:04 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:37.364 19:27:04 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:37.364 19:27:04 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:37.364 19:27:04 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.364 19:27:04 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.364 ************************************ 00:08:37.364 START TEST bdev_hello_world 00:08:37.364 ************************************ 00:08:37.365 19:27:04 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:37.365 [2024-12-05 19:27:04.310829] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:08:37.365 [2024-12-05 19:27:04.310993] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61722 ] 00:08:37.365 [2024-12-05 19:27:04.473124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.365 [2024-12-05 19:27:04.608256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.304 [2024-12-05 19:27:05.206114] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:38.304 [2024-12-05 19:27:05.206176] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:38.304 [2024-12-05 19:27:05.206206] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:38.304 [2024-12-05 19:27:05.208979] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:38.304 [2024-12-05 19:27:05.209792] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:38.304 [2024-12-05 19:27:05.209825] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:38.304 [2024-12-05 19:27:05.210093] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
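The bdev_hello_world pass above is a complete round trip: hello_bdev opens Nvme0n1 through the generated bdev.json, writes a buffer, reads it back, and the read completion hands back the string just written. Outside the harness the same run reduces to the invocation run_test wrapped (the trailing '' in the traced command is blockdev.sh's $env_ctx, empty for this configuration):

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1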
00:08:38.304 00:08:38.304 [2024-12-05 19:27:05.210135] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:38.875 00:08:38.875 real 0m1.796s 00:08:38.875 user 0m1.432s 00:08:38.875 sys 0m0.252s 00:08:38.875 19:27:06 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.875 19:27:06 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:38.875 ************************************ 00:08:38.875 END TEST bdev_hello_world 00:08:38.875 ************************************ 00:08:38.875 19:27:06 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:08:38.875 19:27:06 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:38.875 19:27:06 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.875 19:27:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:38.875 ************************************ 00:08:38.875 START TEST bdev_bounds 00:08:38.875 ************************************ 00:08:38.875 19:27:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:08:38.875 19:27:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61759 00:08:38.875 19:27:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:38.875 Process bdevio pid: 61759 00:08:38.875 19:27:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61759' 00:08:38.875 19:27:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61759 00:08:38.875 19:27:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:38.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.875 19:27:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61759 ']' 00:08:38.875 19:27:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.875 19:27:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.875 19:27:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.875 19:27:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.875 19:27:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:39.134 [2024-12-05 19:27:06.187416] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
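A note on the *NOTICE* completions in the bdevio suites that follow: the comparev and passthru tests drive commands that are expected to fail, and each test still reports "passed" immediately after its notice, so those lines are exercised error paths rather than real faults. The pair spdk_nvme_print_completion prints in parentheses is (status code type/status code) in hex:

    (02/85)  SCT 0x2 Media and Data Integrity Errors / SC 0x85 Compare Failure
             -- logged by "blockdev comparev and writev", which passes right after
    (00/01)  SCT 0x0 Generic Command Status / SC 0x01 Invalid Command Opcode
             -- logged by the nvme passthru tests, which likewise pass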
00:08:39.134 [2024-12-05 19:27:06.187581] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61759 ] 00:08:39.134 [2024-12-05 19:27:06.353265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:39.395 [2024-12-05 19:27:06.466770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.395 [2024-12-05 19:27:06.467322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.395 [2024-12-05 19:27:06.467433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.965 19:27:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.965 19:27:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:08:39.965 19:27:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:39.965 I/O targets: 00:08:39.965 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:39.965 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:08:39.965 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:08:39.965 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:39.965 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:39.965 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:39.965 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:39.965 00:08:39.965 00:08:39.965 CUnit - A unit testing framework for C - Version 2.1-3 00:08:39.965 http://cunit.sourceforge.net/ 00:08:39.965 00:08:39.965 00:08:39.965 Suite: bdevio tests on: Nvme3n1 00:08:39.965 Test: blockdev write read block ...passed 00:08:40.226 Test: blockdev write zeroes read block ...passed 00:08:40.226 Test: blockdev write zeroes read no split ...passed 00:08:40.226 Test: blockdev write zeroes read split ...passed 00:08:40.226 Test: blockdev write zeroes read split partial ...passed 00:08:40.226 Test: blockdev reset ...[2024-12-05 19:27:07.350758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:40.226 [2024-12-05 19:27:07.356037] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:08:40.226 passed 00:08:40.226 Test: blockdev write read 8 blocks ...passed 00:08:40.226 Test: blockdev write read size > 128k ...passed 00:08:40.226 Test: blockdev write read invalid size ...passed 00:08:40.226 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.226 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.226 Test: blockdev write read max offset ...passed 00:08:40.226 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.226 Test: blockdev writev readv 8 blocks ...passed 00:08:40.226 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.226 Test: blockdev writev readv block ...passed 00:08:40.226 Test: blockdev writev readv size > 128k ...passed 00:08:40.226 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.226 Test: blockdev comparev and writev ...[2024-12-05 19:27:07.379308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x291404000 len:0x1000 00:08:40.226 [2024-12-05 19:27:07.379531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:40.226 passed 00:08:40.226 Test: blockdev nvme passthru rw ...passed 00:08:40.226 Test: blockdev nvme passthru vendor specific ...[2024-12-05 19:27:07.382194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:40.226 [2024-12-05 19:27:07.382360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:40.226 passed 00:08:40.226 Test: blockdev nvme admin passthru ...passed 00:08:40.226 Test: blockdev copy ...passed 00:08:40.226 Suite: bdevio tests on: Nvme2n3 00:08:40.226 Test: blockdev write read block ...passed 00:08:40.226 Test: blockdev write zeroes read block ...passed 00:08:40.226 Test: blockdev write zeroes read no split ...passed 00:08:40.487 Test: blockdev write zeroes read split ...passed 00:08:40.487 Test: blockdev write zeroes read split partial ...passed 00:08:40.487 Test: blockdev reset ...[2024-12-05 19:27:07.518941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:40.487 [2024-12-05 19:27:07.524350] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:40.487 passed 00:08:40.487 Test: blockdev write read 8 blocks ...passed 00:08:40.487 Test: blockdev write read size > 128k ...passed 00:08:40.487 Test: blockdev write read invalid size ...passed 00:08:40.487 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.487 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.487 Test: blockdev write read max offset ...passed 00:08:40.487 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.487 Test: blockdev writev readv 8 blocks ...passed 00:08:40.487 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.487 Test: blockdev writev readv block ...passed 00:08:40.487 Test: blockdev writev readv size > 128k ...passed 00:08:40.487 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.487 Test: blockdev comparev and writev ...[2024-12-05 19:27:07.545473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x291402000 len:0x1000 00:08:40.487 [2024-12-05 19:27:07.545747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:40.487 passed 00:08:40.487 Test: blockdev nvme passthru rw ...passed 00:08:40.487 Test: blockdev nvme passthru vendor specific ...[2024-12-05 19:27:07.548795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:40.487 passed 00:08:40.487 Test: blockdev nvme admin passthru ...[2024-12-05 19:27:07.549091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:40.487 passed 00:08:40.487 Test: blockdev copy ...passed 00:08:40.487 Suite: bdevio tests on: Nvme2n2 00:08:40.487 Test: blockdev write read block ...passed 00:08:40.487 Test: blockdev write zeroes read block ...passed 00:08:40.487 Test: blockdev write zeroes read no split ...passed 00:08:40.487 Test: blockdev write zeroes read split ...passed 00:08:40.487 Test: blockdev write zeroes read split partial ...passed 00:08:40.487 Test: blockdev reset ...[2024-12-05 19:27:07.722606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:40.487 [2024-12-05 19:27:07.728020] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:40.487 passed 00:08:40.487 Test: blockdev write read 8 blocks ...passed 00:08:40.487 Test: blockdev write read size > 128k ...passed 00:08:40.487 Test: blockdev write read invalid size ...passed 00:08:40.487 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.487 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.487 Test: blockdev write read max offset ...passed 00:08:40.487 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.487 Test: blockdev writev readv 8 blocks ...passed 00:08:40.748 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.748 Test: blockdev writev readv block ...passed 00:08:40.748 Test: blockdev writev readv size > 128k ...passed 00:08:40.748 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.748 Test: blockdev comparev and writev ...[2024-12-05 19:27:07.749147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bfe38000 len:0x1000 00:08:40.748 [2024-12-05 19:27:07.749499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:40.748 passed 00:08:40.748 Test: blockdev nvme passthru rw ...passed 00:08:40.748 Test: blockdev nvme passthru vendor specific ...[2024-12-05 19:27:07.752690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:40.748 [2024-12-05 19:27:07.752870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:40.748 passed 00:08:40.748 Test: blockdev nvme admin passthru ...passed 00:08:40.748 Test: blockdev copy ...passed 00:08:40.748 Suite: bdevio tests on: Nvme2n1 00:08:40.748 Test: blockdev write read block ...passed 00:08:40.748 Test: blockdev write zeroes read block ...passed 00:08:40.748 Test: blockdev write zeroes read no split ...passed 00:08:40.748 Test: blockdev write zeroes read split ...passed 00:08:40.748 Test: blockdev write zeroes read split partial ...passed 00:08:40.748 Test: blockdev reset ...[2024-12-05 19:27:07.876368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:40.748 [2024-12-05 19:27:07.880620] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:08:40.748 passed 00:08:40.748 Test: blockdev write read 8 blocks ...
00:08:40.748 Test: blockdev write read 8 blocks ... 00:08:40.748 passed 00:08:40.748 Test: blockdev write read size > 128k ...passed 00:08:40.748 Test: blockdev write read invalid size ...passed 00:08:40.748 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.748 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.748 Test: blockdev write read max offset ...passed 00:08:40.748 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.748 Test: blockdev writev readv 8 blocks ...passed 00:08:40.748 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.748 Test: blockdev writev readv block ...passed 00:08:40.748 Test: blockdev writev readv size > 128k ...passed 00:08:40.748 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.748 Test: blockdev comparev and writev ...[2024-12-05 19:27:07.901071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bfe34000 len:0x1000 00:08:40.748 [2024-12-05 19:27:07.901134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:40.748 passed 00:08:40.748 Test: blockdev nvme passthru rw ...passed 00:08:40.748 Test: blockdev nvme passthru vendor specific ...passed 00:08:40.748 Test: blockdev nvme admin passthru ...[2024-12-05 19:27:07.903848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:40.748 [2024-12-05 19:27:07.903897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:40.748 passed 00:08:40.748 Test: blockdev copy ...passed 00:08:40.748 Suite: bdevio tests on: Nvme1n1p2 00:08:40.748 Test: blockdev write read block ...passed 00:08:40.748 Test: blockdev write zeroes read block ...passed 00:08:40.748 Test: blockdev write zeroes read no split ...passed 00:08:40.748 Test: blockdev write zeroes read split ...passed 00:08:40.748 Test: blockdev write zeroes read split partial ...passed 00:08:40.748 Test: blockdev reset ...[2024-12-05 19:27:07.963288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:40.748 [2024-12-05 19:27:07.967974] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed
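Every blockdev reset above produces the same pair of notices: nvme_ctrlr_disconnect tears the controller down and bdev_nvme_reset_ctrlr_complete reports the reattach. The same path can be driven by hand through the bdev_nvme RPC; a minimal sketch, assuming the controller at 0000:00:11.0 was attached under the name Nvme1 (the attach name is an assumption, the rpc.py path is the one used throughout this run):

# Sketch: trigger the disconnect/reattach cycle traced above for one
# controller; "Nvme1" is an assumed attach name, not read from this log.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme1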
00:08:40.748 Test: blockdev write read 8 blocks ... 00:08:40.748 passed 00:08:40.748 Test: blockdev write read size > 128k ...passed 00:08:40.748 Test: blockdev write read invalid size ...passed 00:08:40.748 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.748 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.748 Test: blockdev write read max offset ...passed 00:08:40.748 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.748 Test: blockdev writev readv 8 blocks ...passed 00:08:40.748 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.748 Test: blockdev writev readv block ...passed 00:08:40.748 Test: blockdev writev readv size > 128k ...passed 00:08:40.748 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.748 Test: blockdev comparev and writev ...[2024-12-05 19:27:07.989354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2bfe30000 len:0x1000 00:08:40.748 [2024-12-05 19:27:07.989411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:40.748 passed 00:08:40.748 Test: blockdev nvme passthru rw ...passed 00:08:40.748 Test: blockdev nvme passthru vendor specific ...passed 00:08:40.748 Test: blockdev nvme admin passthru ...passed 00:08:40.748 Test: blockdev copy ...passed 00:08:40.748 Suite: bdevio tests on: Nvme1n1p1 00:08:40.748 Test: blockdev write read block ...passed 00:08:40.748 Test: blockdev write zeroes read block ...passed 00:08:41.009 Test: blockdev write zeroes read no split ...passed 00:08:41.009 Test: blockdev write zeroes read split ...passed 00:08:41.009 Test: blockdev write zeroes read split partial ...passed 00:08:41.009 Test: blockdev reset ...[2024-12-05 19:27:08.048187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:41.009 [2024-12-05 19:27:08.052952] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed
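The partition bdevs make the GPT offset translation visible: the COMPARE for Nvme1n1p2 above lands at lba:655360 and the one for Nvme1n1p1 below at lba:256, both on nsid:1, because block 0 of a part bdev is shifted by that partition's start LBA on the parent namespace. The arithmetic, with the start offsets read straight from these notices:

# Block 0 of a partition bdev maps to (partition start LBA + 0) on the
# parent namespace; the start LBAs below come from the COMPARE notices.
p1_start=256 p2_start=655360
echo "Nvme1n1p1 block 0 -> nsid:1 lba:$((p1_start + 0))"
echo "Nvme1n1p2 block 0 -> nsid:1 lba:$((p2_start + 0))"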
00:08:41.009 Test: blockdev write read 8 blocks ... 00:08:41.009 passed 00:08:41.009 Test: blockdev write read size > 128k ...passed 00:08:41.009 Test: blockdev write read invalid size ...passed 00:08:41.009 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:41.009 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:41.009 Test: blockdev write read max offset ...passed 00:08:41.009 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:41.009 Test: blockdev writev readv 8 blocks ...passed 00:08:41.009 Test: blockdev writev readv 30 x 1block ...passed 00:08:41.009 Test: blockdev writev readv block ...passed 00:08:41.009 Test: blockdev writev readv size > 128k ...passed 00:08:41.009 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:41.009 Test: blockdev comparev and writev ...[2024-12-05 19:27:08.074211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x291e0e000 len:0x1000 00:08:41.009 [2024-12-05 19:27:08.074270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:41.009 passed 00:08:41.009 Test: blockdev nvme passthru rw ...passed 00:08:41.009 Test: blockdev nvme passthru vendor specific ...passed 00:08:41.009 Test: blockdev nvme admin passthru ...passed 00:08:41.009 Test: blockdev copy ...passed 00:08:41.009 Suite: bdevio tests on: Nvme0n1 00:08:41.009 Test: blockdev write read block ...passed 00:08:41.009 Test: blockdev write zeroes read block ...passed 00:08:41.009 Test: blockdev write zeroes read no split ...passed 00:08:41.009 Test: blockdev write zeroes read split ...passed 00:08:41.009 Test: blockdev write zeroes read split partial ...passed 00:08:41.009 Test: blockdev reset ...[2024-12-05 19:27:08.134142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:41.009 [2024-12-05 19:27:08.137387] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:41.009 passed 00:08:41.009 Test: blockdev write read 8 blocks ...passed 00:08:41.009 Test: blockdev write read size > 128k ...passed 00:08:41.009 Test: blockdev write read invalid size ...passed 00:08:41.009 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:41.009 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:41.009 Test: blockdev write read max offset ...passed 00:08:41.009 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:41.009 Test: blockdev writev readv 8 blocks ...passed 00:08:41.009 Test: blockdev writev readv 30 x 1block ...passed 00:08:41.009 Test: blockdev writev readv block ...passed 00:08:41.009 Test: blockdev writev readv size > 128k ...passed 00:08:41.009 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:41.009 Test: blockdev comparev and writev ...passed 00:08:41.009 Test: blockdev nvme passthru rw ...[2024-12-05 19:27:08.155542] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:41.009 separate metadata which is not supported yet.
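The deliberate skip above: Nvme0n1 is formatted with separate (non-interleaved) metadata, which blockdev_comparev_and_writev cannot handle yet, so bdevio logs the ERROR notice and moves on. Which bdevs are affected can be read back over the same RPC channel; a sketch assuming jq is available (the filter is illustrative; md_size and md_interleave are fields reported per bdev by bdev_get_bdevs):

# List bdevs that carry separate metadata, i.e. the ones comparev skips.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
  | jq -r '.[] | select(.md_size > 0 and .md_interleave == false) | .name'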
00:08:41.009 passed 00:08:41.009 Test: blockdev nvme passthru vendor specific ...passed 00:08:41.009 Test: blockdev nvme admin passthru ...[2024-12-05 19:27:08.157144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:41.009 [2024-12-05 19:27:08.157204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:41.009 passed 00:08:41.009 Test: blockdev copy ...passed 00:08:41.009 00:08:41.009 Run Summary: Type Total Ran Passed Failed Inactive 00:08:41.009 suites 7 7 n/a 0 0 00:08:41.009 tests 161 161 161 0 0 00:08:41.009 asserts 1025 1025 1025 0 n/a 00:08:41.009 00:08:41.009 Elapsed time = 2.230 seconds 00:08:41.009 0 00:08:41.009 19:27:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61759 00:08:41.009 19:27:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61759 ']' 00:08:41.009 19:27:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61759 00:08:41.009 19:27:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:08:41.010 19:27:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.010 19:27:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61759 00:08:41.010 19:27:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:41.010 19:27:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.010 19:27:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61759' 00:08:41.010 killing process with pid 61759 00:08:41.010 19:27:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61759 00:08:41.010 19:27:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61759 00:08:41.950 19:27:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:41.950 00:08:41.950 real 0m2.882s 00:08:41.950 user 0m7.081s 00:08:41.950 sys 0m0.377s 00:08:41.950 19:27:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.950 ************************************ 00:08:41.950 END TEST bdev_bounds 00:08:41.950 ************************************ 00:08:41.950 19:27:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:41.950 19:27:09 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:41.950 19:27:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:41.950 19:27:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.950 19:27:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:41.950 ************************************ 00:08:41.950 START TEST bdev_nbd 00:08:41.950 ************************************ 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:41.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61824 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61824 /var/tmp/spdk-nbd.sock 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61824 ']' 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.950 19:27:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:41.950 [2024-12-05 19:27:09.140230] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
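Stripped of the xtrace noise, the nbd setup above has three moving parts: a bdev_svc app serving RPC on /var/tmp/spdk-nbd.sock, one nbd_start_disk call per bdev to expose it as a /dev/nbdX device, and a matching nbd_stop_disk on teardown. A condensed replay for a single bdev, using only commands that appear verbatim in the trace that follows:

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-nbd.sock
# App hosting the bdevs and the nbd RPCs (same invocation as above):
$SPDK/test/app/bdev_svc/bdev_svc -r $SOCK -i 0 --json $SPDK/test/bdev/bdev.json &
# Once the socket is up, export a bdev as a kernel nbd device:
$SPDK/scripts/rpc.py -s $SOCK nbd_start_disk Nvme0n1 /dev/nbd0
# ...and detach it again when done:
$SPDK/scripts/rpc.py -s $SOCK nbd_stop_disk /dev/nbd0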
00:08:41.950 [2024-12-05 19:27:09.140632] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.211 [2024-12-05 19:27:09.306282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.211 [2024-12-05 19:27:09.445828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.178 1+0 records in 00:08:43.178 1+0 records out 00:08:43.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000672269 s, 6.1 MB/s 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:43.178 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:43.440 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:43.440 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:43.440 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:43.440 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:43.440 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:43.440 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:43.440 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:43.440 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:43.440 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:43.440 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:43.440 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:43.440 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.440 1+0 records in 00:08:43.440 1+0 records out 00:08:43.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00196607 s, 2.1 MB/s 00:08:43.440 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.440 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:43.440 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.440 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:43.440 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:43.440 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:43.440 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:43.440 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:08:43.701 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:43.701 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:43.701 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:43.701 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:08:43.701 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:43.701 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:43.701 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:43.701 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:08:43.701 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:43.701 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:43.701 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:43.701 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.701 1+0 records in 00:08:43.701 1+0 records out 00:08:43.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110161 s, 3.7 MB/s 00:08:43.701 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.702 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:43.702 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.702 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:43.702 19:27:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:43.702 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:43.702 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:43.702 19:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:43.961 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:43.961 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:43.961 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:43.961 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:08:43.961 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:43.961 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:43.961 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:43.962 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:08:43.962 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:43.962 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:43.962 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:43.962 19:27:11 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.962 1+0 records in 00:08:43.962 1+0 records out 00:08:43.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122311 s, 3.3 MB/s 00:08:43.962 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.962 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:43.962 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.962 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:43.962 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:43.962 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:43.962 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:43.962 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:44.221 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:44.221 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:44.222 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:44.222 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:08:44.222 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:44.222 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:44.222 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:44.222 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:08:44.222 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:44.222 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:44.222 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:44.222 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:44.222 1+0 records in 00:08:44.222 1+0 records out 00:08:44.222 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111622 s, 3.7 MB/s 00:08:44.222 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.222 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:44.222 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.222 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:44.222 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:44.222 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:44.222 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:44.222 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
00:08:44.480 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:44.480 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:44.480 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:44.480 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:08:44.480 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:44.480 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:44.480 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:44.480 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:08:44.480 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:44.480 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:44.480 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:44.480 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:44.480 1+0 records in 00:08:44.480 1+0 records out 00:08:44.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100382 s, 4.1 MB/s 00:08:44.480 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.480 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:44.481 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.481 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:44.481 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:44.481 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:44.481 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:44.481 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:44.741 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:44.741 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:44.741 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:44.741 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:08:44.741 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:44.741 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:44.741 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:44.741 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:08:44.741 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:44.741 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:44.741 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:44.741 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:44.741 1+0 records in 00:08:44.741 1+0 records out 00:08:44.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129764 s, 3.2 MB/s 00:08:44.741 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.741 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:44.741 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.741 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:44.741 19:27:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:44.741 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:44.741 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:44.741 19:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:45.003 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:45.003 { 00:08:45.003 "nbd_device": "/dev/nbd0", 00:08:45.003 "bdev_name": "Nvme0n1" 00:08:45.003 }, 00:08:45.003 { 00:08:45.003 "nbd_device": "/dev/nbd1", 00:08:45.003 "bdev_name": "Nvme1n1p1" 00:08:45.003 }, 00:08:45.003 { 00:08:45.003 "nbd_device": "/dev/nbd2", 00:08:45.003 "bdev_name": "Nvme1n1p2" 00:08:45.003 }, 00:08:45.003 { 00:08:45.003 "nbd_device": "/dev/nbd3", 00:08:45.003 "bdev_name": "Nvme2n1" 00:08:45.003 }, 00:08:45.003 { 00:08:45.003 "nbd_device": "/dev/nbd4", 00:08:45.003 "bdev_name": "Nvme2n2" 00:08:45.003 }, 00:08:45.003 { 00:08:45.003 "nbd_device": "/dev/nbd5", 00:08:45.003 "bdev_name": "Nvme2n3" 00:08:45.003 }, 00:08:45.003 { 00:08:45.003 "nbd_device": "/dev/nbd6", 00:08:45.003 "bdev_name": "Nvme3n1" 00:08:45.003 } 00:08:45.003 ]' 00:08:45.003 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:45.003 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:45.003 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:45.003 { 00:08:45.003 "nbd_device": "/dev/nbd0", 00:08:45.003 "bdev_name": "Nvme0n1" 00:08:45.003 }, 00:08:45.003 { 00:08:45.003 "nbd_device": "/dev/nbd1", 00:08:45.003 "bdev_name": "Nvme1n1p1" 00:08:45.003 }, 00:08:45.003 { 00:08:45.003 "nbd_device": "/dev/nbd2", 00:08:45.003 "bdev_name": "Nvme1n1p2" 00:08:45.003 }, 00:08:45.003 { 00:08:45.003 "nbd_device": "/dev/nbd3", 00:08:45.003 "bdev_name": "Nvme2n1" 00:08:45.003 }, 00:08:45.003 { 00:08:45.003 "nbd_device": "/dev/nbd4", 00:08:45.003 "bdev_name": "Nvme2n2" 00:08:45.003 }, 00:08:45.003 { 00:08:45.003 "nbd_device": "/dev/nbd5", 00:08:45.003 "bdev_name": "Nvme2n3" 00:08:45.003 }, 00:08:45.003 { 00:08:45.003 "nbd_device": "/dev/nbd6", 00:08:45.003 "bdev_name": "Nvme3n1" 00:08:45.004 } 00:08:45.004 ]' 00:08:45.004 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:45.004 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.004 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:45.004 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:45.004 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:45.004 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.004 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:45.264 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:45.264 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:45.264 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:45.264 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.264 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.264 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:45.264 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:45.264 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.264 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.264 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:45.546 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:45.546 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:45.546 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:45.546 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.546 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.546 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:45.546 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:45.546 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.546 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.546 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:45.806 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:45.806 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:45.806 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:45.806 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.806 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.806 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:45.806 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:45.806 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.806 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.806 19:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:46.067 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:46.067 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:46.067 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:46.067 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.067 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.067 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:46.067 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:46.067 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.067 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:46.067 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:46.067 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:46.067 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:46.067 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:46.067 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.067 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.067 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:46.067 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:46.067 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.067 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:46.067 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:46.327 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:46.327 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:46.327 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:46.327 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.327 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.327 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:46.327 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:46.327 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.327 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:46.327 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:46.587 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:46.588 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:46.588 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:46.588 19:27:13 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.588 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.588 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:46.588 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:46.588 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.588 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:46.588 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.588 19:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:46.848 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:47.108 /dev/nbd0 00:08:47.108 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:47.108 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:47.108 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:47.108 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:47.108 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:47.108 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:47.108 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:47.108 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:47.108 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:47.108 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:47.108 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.108 1+0 records in 00:08:47.108 1+0 records out 00:08:47.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00165688 s, 2.5 MB/s 00:08:47.108 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.108 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:47.108 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.108 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:47.108 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:47.108 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.108 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:47.108 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:47.368 /dev/nbd1 00:08:47.368 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:47.368 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:47.368 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:47.368 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:47.368 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:47.368 19:27:14 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:47.368 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:47.368 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:47.368 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:47.368 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:47.368 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.368 1+0 records in 00:08:47.368 1+0 records out 00:08:47.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000983238 s, 4.2 MB/s 00:08:47.369 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.369 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:47.369 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.369 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:47.369 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:47.369 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.369 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:47.369 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:47.629 /dev/nbd10 00:08:47.629 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:47.629 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:47.629 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:08:47.629 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:47.629 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:47.629 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:47.629 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:08:47.629 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:47.629 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:47.629 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:47.629 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.629 1+0 records in 00:08:47.629 1+0 records out 00:08:47.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00147037 s, 2.8 MB/s 00:08:47.629 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.629 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:47.629 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.629 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 
'!=' 0 ']' 00:08:47.629 19:27:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:47.629 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.629 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:47.629 19:27:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:47.889 /dev/nbd11 00:08:47.889 19:27:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:47.889 19:27:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:47.889 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:08:47.889 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:47.889 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:47.889 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:47.889 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:08:47.889 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:47.889 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:47.889 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:47.889 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.889 1+0 records in 00:08:47.889 1+0 records out 00:08:47.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00143347 s, 2.9 MB/s 00:08:47.889 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.889 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:47.889 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.889 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:47.889 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:47.889 19:27:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.889 19:27:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:47.889 19:27:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:48.149 /dev/nbd12 00:08:48.149 19:27:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:48.149 19:27:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:48.149 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:08:48.149 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:48.149 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:48.149 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:48.149 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:08:48.149 19:27:15 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:48.149 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:48.149 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:48.149 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.149 1+0 records in 00:08:48.149 1+0 records out 00:08:48.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00165658 s, 2.5 MB/s 00:08:48.149 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.149 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:48.149 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.149 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:48.149 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:48.149 19:27:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:48.149 19:27:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:48.149 19:27:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:48.411 /dev/nbd13 00:08:48.411 19:27:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:48.411 19:27:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:48.411 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:08:48.411 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:48.411 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:48.411 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:48.411 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:08:48.411 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:48.411 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:48.411 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:48.411 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.411 1+0 records in 00:08:48.411 1+0 records out 00:08:48.411 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00125653 s, 3.3 MB/s 00:08:48.411 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.411 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:48.411 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.411 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:48.411 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:48.411 19:27:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:08:48.411 19:27:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:48.411 19:27:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:48.671 /dev/nbd14 00:08:48.671 19:27:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:48.671 19:27:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:48.671 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:08:48.671 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:48.671 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:48.671 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:48.671 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:08:48.671 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:48.671 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:48.671 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:48.671 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.671 1+0 records in 00:08:48.671 1+0 records out 00:08:48.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103157 s, 4.0 MB/s 00:08:48.671 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.671 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:48.671 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.671 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:48.671 19:27:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:48.671 19:27:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:48.671 19:27:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:48.671 19:27:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:48.671 19:27:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.671 19:27:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:48.930 19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:48.930 { 00:08:48.930 "nbd_device": "/dev/nbd0", 00:08:48.930 "bdev_name": "Nvme0n1" 00:08:48.930 }, 00:08:48.930 { 00:08:48.930 "nbd_device": "/dev/nbd1", 00:08:48.930 "bdev_name": "Nvme1n1p1" 00:08:48.930 }, 00:08:48.930 { 00:08:48.930 "nbd_device": "/dev/nbd10", 00:08:48.930 "bdev_name": "Nvme1n1p2" 00:08:48.930 }, 00:08:48.930 { 00:08:48.930 "nbd_device": "/dev/nbd11", 00:08:48.930 "bdev_name": "Nvme2n1" 00:08:48.930 }, 00:08:48.930 { 00:08:48.930 "nbd_device": "/dev/nbd12", 00:08:48.930 "bdev_name": "Nvme2n2" 00:08:48.930 }, 00:08:48.930 { 00:08:48.930 "nbd_device": "/dev/nbd13", 00:08:48.930 "bdev_name": "Nvme2n3" 00:08:48.930 }, 00:08:48.930 { 
00:08:48.930 "nbd_device": "/dev/nbd14", 00:08:48.930 "bdev_name": "Nvme3n1" 00:08:48.930 } 00:08:48.930 ]' 00:08:48.930 19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:48.930 19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:48.930 { 00:08:48.930 "nbd_device": "/dev/nbd0", 00:08:48.930 "bdev_name": "Nvme0n1" 00:08:48.930 }, 00:08:48.930 { 00:08:48.930 "nbd_device": "/dev/nbd1", 00:08:48.930 "bdev_name": "Nvme1n1p1" 00:08:48.930 }, 00:08:48.930 { 00:08:48.930 "nbd_device": "/dev/nbd10", 00:08:48.930 "bdev_name": "Nvme1n1p2" 00:08:48.930 }, 00:08:48.930 { 00:08:48.930 "nbd_device": "/dev/nbd11", 00:08:48.930 "bdev_name": "Nvme2n1" 00:08:48.930 }, 00:08:48.930 { 00:08:48.930 "nbd_device": "/dev/nbd12", 00:08:48.930 "bdev_name": "Nvme2n2" 00:08:48.930 }, 00:08:48.930 { 00:08:48.930 "nbd_device": "/dev/nbd13", 00:08:48.930 "bdev_name": "Nvme2n3" 00:08:48.930 }, 00:08:48.930 { 00:08:48.930 "nbd_device": "/dev/nbd14", 00:08:48.930 "bdev_name": "Nvme3n1" 00:08:48.930 } 00:08:48.930 ]' 00:08:48.930 19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:48.930 /dev/nbd1 00:08:48.930 /dev/nbd10 00:08:48.930 /dev/nbd11 00:08:48.930 /dev/nbd12 00:08:48.930 /dev/nbd13 00:08:48.930 /dev/nbd14' 00:08:48.930 19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:48.930 /dev/nbd1 00:08:48.930 /dev/nbd10 00:08:48.930 /dev/nbd11 00:08:48.930 /dev/nbd12 00:08:48.930 /dev/nbd13 00:08:48.930 /dev/nbd14' 00:08:48.930 19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:48.930 19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:48.930 19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:48.930 19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:48.930 19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:48.930 19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:48.930 19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:48.930 19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:48.930 19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:48.930 19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:48.930 19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:48.930 19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:48.930 256+0 records in 00:08:48.930 256+0 records out 00:08:48.930 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123892 s, 84.6 MB/s 00:08:48.930 19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.930 19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:49.190 256+0 records in 00:08:49.190 256+0 records out 00:08:49.190 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.2415 s, 4.3 MB/s 00:08:49.190 
19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:49.190 19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:49.760 256+0 records in 00:08:49.760 256+0 records out 00:08:49.760 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.286195 s, 3.7 MB/s 00:08:49.760 19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:49.760 19:27:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:49.760 256+0 records in 00:08:49.760 256+0 records out 00:08:49.760 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.291501 s, 3.6 MB/s 00:08:49.760 19:27:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:49.760 19:27:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:50.343 256+0 records in 00:08:50.343 256+0 records out 00:08:50.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.285921 s, 3.7 MB/s 00:08:50.343 19:27:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:50.343 19:27:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:50.343 256+0 records in 00:08:50.343 256+0 records out 00:08:50.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.282794 s, 3.7 MB/s 00:08:50.343 19:27:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:50.343 19:27:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:50.915 256+0 records in 00:08:50.915 256+0 records out 00:08:50.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.286299 s, 3.7 MB/s 00:08:50.915 19:27:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:50.915 19:27:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:51.176 256+0 records in 00:08:51.176 256+0 records out 00:08:51.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.293298 s, 3.6 MB/s 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:51.176 19:27:18 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.176 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:51.436 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:51.436 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:51.436 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:51.436 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.436 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.436 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:51.436 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:51.436 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.436 
19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.436 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:51.695 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:51.695 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:51.695 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:51.695 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.695 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.695 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:51.695 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:51.695 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.695 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.695 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:51.958 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:51.958 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:51.958 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:51.958 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.958 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.958 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:51.958 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:51.958 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.958 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.958 19:27:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:51.958 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:51.958 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:51.958 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:51.958 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.958 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.958 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:51.958 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:51.958 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.958 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.958 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:52.218 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:52.218 19:27:19 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:52.218 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:52.218 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.218 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.218 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:52.218 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:52.218 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.218 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.218 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:52.495 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:52.495 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:52.495 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:52.495 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.495 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.495 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:52.495 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:52.495 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.495 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.495 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:52.757 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:52.757 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:52.757 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:52.757 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.757 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.757 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:52.757 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:52.757 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.757 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:52.757 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.757 19:27:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:53.018 19:27:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:53.018 19:27:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:53.018 19:27:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:53.018 19:27:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:53.018 
19:27:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:53.018 19:27:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:53.018 19:27:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:53.018 19:27:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:53.018 19:27:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:53.018 19:27:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:53.018 19:27:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:53.018 19:27:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:53.018 19:27:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:53.018 19:27:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.018 19:27:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:53.018 19:27:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:53.279 malloc_lvol_verify 00:08:53.279 19:27:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:53.542 5f2efe7a-a072-491c-8959-6ae7759a63f9 00:08:53.542 19:27:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:53.803 68e4af04-6a1e-4b43-9340-b6ff838088d1 00:08:53.803 19:27:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:53.803 /dev/nbd0 00:08:53.803 19:27:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:53.803 19:27:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:53.803 19:27:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:53.803 19:27:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:53.803 19:27:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:54.064 mke2fs 1.47.0 (5-Feb-2023) 00:08:54.065 Discarding device blocks: 0/4096 done 00:08:54.065 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:54.065 00:08:54.065 Allocating group tables: 0/1 done 00:08:54.065 Writing inode tables: 0/1 done 00:08:54.065 Creating journal (1024 blocks): done 00:08:54.065 Writing superblocks and filesystem accounting information: 0/1 done 00:08:54.065 00:08:54.065 19:27:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:54.065 19:27:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.065 19:27:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:54.065 19:27:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:54.065 19:27:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:54.065 19:27:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:54.065 19:27:21 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:54.065 19:27:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:54.065 19:27:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:54.065 19:27:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:54.065 19:27:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:54.065 19:27:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:54.065 19:27:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:54.065 19:27:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:54.065 19:27:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:54.065 19:27:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61824 00:08:54.065 19:27:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61824 ']' 00:08:54.065 19:27:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61824 00:08:54.065 19:27:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:08:54.326 19:27:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.326 19:27:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61824 00:08:54.326 killing process with pid 61824 00:08:54.326 19:27:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.326 19:27:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.326 19:27:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61824' 00:08:54.326 19:27:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61824 00:08:54.326 19:27:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61824 00:08:55.271 19:27:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:55.271 00:08:55.271 real 0m13.155s 00:08:55.271 user 0m17.419s 00:08:55.271 sys 0m4.584s 00:08:55.271 19:27:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.271 19:27:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:55.271 ************************************ 00:08:55.271 END TEST bdev_nbd 00:08:55.271 ************************************ 00:08:55.271 skipping fio tests on NVMe due to multi-ns failures. 00:08:55.271 19:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:08:55.271 19:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:08:55.271 19:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:08:55.271 19:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
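The nbd hookup traced above repeats one idiom per device: after each nbd_start_disk, a waitfornbd-style check polls /proc/partitions for the new node and then proves it serves data with a single 4 KiB O_DIRECT read. A minimal sketch of that pattern, assuming plain coreutils: the retry cap of 20 and the dd/stat checks mirror the trace, while the sleep between polls and the /tmp scratch path are illustrative guesses (the harness's real helper lives in common/autotest_common.sh and writes to test/bdev/nbdtest).

    waitfornbd_sketch() {
        local nbd_name=$1 i
        # Poll until the kernel lists the device, capped at 20 attempts as in the trace.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed pacing; the log does not show the delay
        done
        # One 4 KiB direct read proves the nbd device actually serves data.
        dd "if=/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        # The trace then checks that the copied size is non-zero before returning success.
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ]
    }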
00:08:55.271 19:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:55.271 19:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:55.271 19:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:55.271 19:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.271 19:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:55.271 ************************************ 00:08:55.271 START TEST bdev_verify 00:08:55.271 ************************************ 00:08:55.271 19:27:22 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:55.271 [2024-12-05 19:27:22.349956] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:08:55.271 [2024-12-05 19:27:22.350088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62262 ] 00:08:55.271 [2024-12-05 19:27:22.512398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:55.635 [2024-12-05 19:27:22.621777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.635 [2024-12-05 19:27:22.621784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.209 Running I/O for 5 seconds... 
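For reference while the verify pass runs: the bdevperf flags in the command above map directly onto what the tables below report. -q 128 is the per-job queue depth, -o 4096 the I/O size in bytes (hence the 4 KiB arithmetic behind the MiB/s column), -w verify a write-read-compare workload, -t 5 the run time in seconds, and -m 0x3 a two-core mask matching the two reactors just started; -C and the trailing '' are passed through by the harness unchanged. A by-hand re-run on this VM would be the same invocation:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3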
00:08:58.531 19072.00 IOPS, 74.50 MiB/s
[2024-12-05T19:27:26.729Z] 18432.00 IOPS, 72.00 MiB/s
[2024-12-05T19:27:27.671Z] 18538.67 IOPS, 72.42 MiB/s
[2024-12-05T19:27:28.611Z] 18784.00 IOPS, 73.38 MiB/s
[2024-12-05T19:27:28.611Z] 18470.40 IOPS, 72.15 MiB/s
00:09:01.356 Latency(us)
00:09:01.356 [2024-12-05T19:27:28.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:01.356 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:01.356 Verification LBA range: start 0x0 length 0xbd0bd
00:09:01.356 Nvme0n1 : 5.07 1287.44 5.03 0.00 0.00 99100.83 20164.92 111310.38
00:09:01.356 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:01.356 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:09:01.356 Nvme0n1 : 5.07 1311.94 5.12 0.00 0.00 97341.86 16535.24 108890.58
00:09:01.356 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:01.356 Verification LBA range: start 0x0 length 0x4ff80
00:09:01.356 Nvme1n1p1 : 5.07 1286.78 5.03 0.00 0.00 98966.72 22584.71 112116.97
00:09:01.356 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:01.356 Verification LBA range: start 0x4ff80 length 0x4ff80
00:09:01.356 Nvme1n1p1 : 5.08 1310.49 5.12 0.00 0.00 97239.21 20164.92 102841.11
00:09:01.356 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:01.356 Verification LBA range: start 0x0 length 0x4ff7f
00:09:01.357 Nvme1n1p2 : 5.08 1285.55 5.02 0.00 0.00 98839.32 25105.33 114536.76
00:09:01.357 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:01.357 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:09:01.357 Nvme1n1p2 : 5.08 1309.02 5.11 0.00 0.00 97047.99 23492.14 97194.93
00:09:01.357 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:01.357 Verification LBA range: start 0x0 length 0x80000
00:09:01.357 Nvme2n1 : 5.08 1284.93 5.02 0.00 0.00 98753.71 27021.00 112923.57
00:09:01.357 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:01.357 Verification LBA range: start 0x80000 length 0x80000
00:09:01.357 Nvme2n1 : 5.09 1307.91 5.11 0.00 0.00 96927.30 26214.40 95178.44
00:09:01.357 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:01.357 Verification LBA range: start 0x0 length 0x80000
00:09:01.357 Nvme2n2 : 5.08 1283.85 5.02 0.00 0.00 98616.11 26012.75 113730.17
00:09:01.357 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:01.357 Verification LBA range: start 0x80000 length 0x80000
00:09:01.357 Nvme2n2 : 5.09 1306.89 5.11 0.00 0.00 96797.30 25407.80 97194.93
00:09:01.357 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:01.357 Verification LBA range: start 0x0 length 0x80000
00:09:01.357 Nvme2n3 : 5.09 1282.79 5.01 0.00 0.00 98524.67 22685.54 113730.17
00:09:01.357 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:01.357 Verification LBA range: start 0x80000 length 0x80000
00:09:01.357 Nvme2n3 : 5.09 1306.52 5.10 0.00 0.00 96655.28 21173.17 103244.41
00:09:01.357 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:01.357 Verification LBA range: start 0x0 length 0x20000
00:09:01.357 Nvme3n1 : 5.10 1292.80 5.05 0.00 0.00 97769.35 2495.41 113730.17
00:09:01.357 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:01.357 Verification LBA range: start 0x20000 length 0x20000
00:09:01.357 Nvme3n1 : 5.10 1306.12 5.10 0.00 0.00 96530.44 17442.66 108890.58
[2024-12-05T19:27:28.612Z] ===================================================================================================================
[2024-12-05T19:27:28.612Z] Total : 18163.04 70.95 0.00 0.00 97785.22 2495.41 114536.76
00:09:02.740
00:09:02.740 real 0m7.298s
00:09:02.740 user 0m13.563s
00:09:02.740 sys 0m0.247s
00:09:02.740 19:27:29 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:02.740 19:27:29 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:09:02.740 ************************************
00:09:02.740 END TEST bdev_verify
00:09:02.740 ************************************
00:09:02.740 19:27:29 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:02.740 19:27:29 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:09:02.740 19:27:29 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:02.740 19:27:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:02.740 ************************************
00:09:02.740 START TEST bdev_verify_big_io
00:09:02.740 ************************************
00:09:02.740 19:27:29 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:02.740 [2024-12-05 19:27:29.732840] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization...
00:09:02.740 [2024-12-05 19:27:29.732991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62355 ]
00:09:02.740 [2024-12-05 19:27:29.900438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:03.001 [2024-12-05 19:27:30.043950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:03.001 [2024-12-05 19:27:30.043952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:03.572 Running I/O for 5 seconds...
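While the big-I/O pass above spins up, a quick arithmetic check on the bdev_verify table just printed: the MiB/s column is simply IOPS times the 4096-byte I/O size, divided by 2^20. The final five-second sample works out exactly:

    awk 'BEGIN { printf "%.2f\n", 18470.40 * 4096 / 1048576 }'
    # prints 72.15, matching the last interval above

The same holds for the Total row (18163.04 IOPS rounds to 70.95 MiB/s), so the two columns are internally consistent.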
00:09:09.419 1634.00 IOPS, 102.12 MiB/s
[2024-12-05T19:27:36.936Z] 2536.50 IOPS, 158.53 MiB/s
[2024-12-05T19:27:37.198Z] 3011.00 IOPS, 188.19 MiB/s
00:09:09.943 Latency(us)
00:09:09.943 [2024-12-05T19:27:37.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:09.943 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:09.943 Verification LBA range: start 0x0 length 0xbd0b
00:09:09.943 Nvme0n1 : 5.83 110.09 6.88 0.00 0.00 1093394.27 24298.73 1148594.02
00:09:09.943 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:09.943 Verification LBA range: start 0xbd0b length 0xbd0b
00:09:09.943 Nvme0n1 : 5.94 106.68 6.67 0.00 0.00 1127571.77 21979.77 1258291.20
00:09:09.943 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:09.943 Verification LBA range: start 0x0 length 0x4ff8
00:09:09.943 Nvme1n1p1 : 5.83 113.99 7.12 0.00 0.00 1052055.89 102034.51 1116330.14
00:09:09.943 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:09.943 Verification LBA range: start 0x4ff8 length 0x4ff8
00:09:09.943 Nvme1n1p1 : 5.94 105.30 6.58 0.00 0.00 1109590.03 89128.96 1232480.10
00:09:09.943 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:09.943 Verification LBA range: start 0x0 length 0x4ff7
00:09:09.943 Nvme1n1p2 : 5.90 103.01 6.44 0.00 0.00 1124843.19 120182.94 1677721.60
00:09:09.943 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:09.943 Verification LBA range: start 0x4ff7 length 0x4ff7
00:09:09.943 Nvme1n1p2 : 5.95 104.87 6.55 0.00 0.00 1101409.39 108083.99 1793871.56
00:09:09.943 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:09.943 Verification LBA range: start 0x0 length 0x8000
00:09:09.943 Nvme2n1 : 5.94 118.65 7.42 0.00 0.00 960646.35 65737.65 1103424.59
00:09:09.943 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:09.943 Verification LBA range: start 0x8000 length 0x8000
00:09:09.943 Nvme2n1 : 6.02 113.75 7.11 0.00 0.00 981674.08 68560.74 1316366.18
00:09:09.943 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:09.943 Verification LBA range: start 0x0 length 0x8000
00:09:09.943 Nvme2n2 : 5.94 123.65 7.73 0.00 0.00 905123.83 37708.41 948557.98
00:09:09.943 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:09.943 Verification LBA range: start 0x8000 length 0x8000
00:09:09.943 Nvme2n2 : 6.07 114.09 7.13 0.00 0.00 952637.96 30045.74 1871304.86
00:09:09.943 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:09.943 Verification LBA range: start 0x0 length 0x8000
00:09:09.943 Nvme2n3 : 5.99 128.24 8.01 0.00 0.00 845302.94 40128.20 980821.86
00:09:09.943 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:09.943 Verification LBA range: start 0x8000 length 0x8000
00:09:09.943 Nvme2n3 : 6.11 128.72 8.04 0.00 0.00 818994.76 16938.54 1380893.93
00:09:09.943 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:09.943 Verification LBA range: start 0x0 length 0x2000
00:09:09.943 Nvme3n1 : 6.05 143.90 8.99 0.00 0.00 733525.15 12149.37 1000180.18
00:09:09.943 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:09.943 Verification LBA range: start 0x2000 length 0x2000
00:09:09.943 Nvme3n1 : 6.16 153.06 9.57 0.00 0.00 676587.04 1020.85 1961643.72
[2024-12-05T19:27:37.198Z] ===================================================================================================================
[2024-12-05T19:27:37.198Z] Total : 1668.01 104.25 0.00 0.00 944127.21 1020.85 1961643.72
00:09:11.856
00:09:11.856 real 0m9.290s
00:09:11.856 user 0m17.381s
00:09:11.856 sys 0m0.351s
00:09:11.856 19:27:38 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:11.856 19:27:38 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:09:11.856 ************************************
00:09:11.856 END TEST bdev_verify_big_io
00:09:11.856 ************************************
00:09:11.856 19:27:38 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:11.856 19:27:39 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:09:11.856 19:27:39 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:11.856 19:27:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:11.856 ************************************
00:09:11.856 START TEST bdev_write_zeroes
00:09:11.856 ************************************
00:09:11.856 19:27:39 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:12.116 [2024-12-05 19:27:39.096301] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization...
00:09:12.116 [2024-12-05 19:27:39.096489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62470 ]
00:09:12.116 [2024-12-05 19:27:39.265063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:12.436 [2024-12-05 19:27:39.405403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:13.008 Running I/O for 1 seconds...
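Each suite in this log is driven by the harness's run_test wrapper, which prints the starred START TEST/END TEST banners and the real/user/sys triplet seen after every suite. A rough sketch of that behavior, inferred from the output here rather than from the wrapper's actual source in common/autotest_common.sh:

    run_test_sketch() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"      # bash's time keyword emits the real/user/sys lines
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }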
00:09:13.984 38976.00 IOPS, 152.25 MiB/s
00:09:13.984 Latency(us)
00:09:13.984 [2024-12-05T19:27:41.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:13.984 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:13.984 Nvme0n1 : 1.03 5579.18 21.79 0.00 0.00 22870.66 9981.64 41741.39
00:09:13.984 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:13.984 Nvme1n1p1 : 1.03 5570.49 21.76 0.00 0.00 22877.91 16232.76 43152.94
00:09:13.985 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:13.985 Nvme1n1p2 : 1.04 5562.66 21.73 0.00 0.00 22736.60 16131.94 38716.65
00:09:13.985 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:13.985 Nvme2n1 : 1.04 5556.04 21.70 0.00 0.00 22681.14 17140.18 37506.76
00:09:13.985 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:13.985 Nvme2n2 : 1.04 5549.63 21.68 0.00 0.00 22575.47 15426.17 35490.26
00:09:13.985 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:13.985 Nvme2n3 : 1.04 5543.23 21.65 0.00 0.00 22544.84 14317.10 33070.47
00:09:13.985 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:13.985 Nvme3n1 : 1.04 5536.25 21.63 0.00 0.00 22496.33 12905.55 32465.53
00:09:13.985 [2024-12-05T19:27:41.240Z] ===================================================================================================================
00:09:13.985 [2024-12-05T19:27:41.240Z] Total : 38897.50 151.94 0.00 0.00 22683.28 9981.64 43152.94
00:09:14.941
00:09:14.942
00:09:14.942 real 0m2.970s
00:09:14.942 user 0m2.563s
00:09:14.942 sys 0m0.275s
00:09:14.942 19:27:41 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:14.942 ************************************
00:09:14.942 END TEST bdev_write_zeroes
00:09:14.942 ************************************
00:09:14.942 19:27:41 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:09:14.942 19:27:42 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:14.942 19:27:42 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:09:14.942 19:27:42 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:14.942 19:27:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:14.942 ************************************
00:09:14.942 START TEST bdev_json_nonenclosed
00:09:14.942 ************************************
00:09:14.942 19:27:42 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:14.942 [2024-12-05 19:27:42.135126] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization...
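bdev_json_nonenclosed, now starting, is a negative test: nonenclosed.json is deliberately malformed so that bdevperf must refuse it, as the "not enclosed in {}" error below confirms. The file's actual contents are not shown in this log; a hypothetical stand-in with the same defect would be a top-level key that is never wrapped in an object:

    # Hypothetical stand-in only; the real nonenclosed.json is not shown here.
    printf '%s\n' '"subsystems": [' ']' > /tmp/nonenclosed.json
    # Feeding this to bdevperf --json should trip json_config's
    # "not enclosed in {}" check and stop the app with a non-zero exit code,
    # which is exactly what this negative test treats as a pass.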
00:09:14.942 [2024-12-05 19:27:42.135307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62518 ] 00:09:15.202 [2024-12-05 19:27:42.308350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.202 [2024-12-05 19:27:42.450465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.202 [2024-12-05 19:27:42.450621] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:15.202 [2024-12-05 19:27:42.450649] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:15.202 [2024-12-05 19:27:42.450664] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:15.459 00:09:15.459 real 0m0.634s 00:09:15.459 user 0m0.400s 00:09:15.459 sys 0m0.126s 00:09:15.459 19:27:42 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.459 19:27:42 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:15.459 ************************************ 00:09:15.459 END TEST bdev_json_nonenclosed 00:09:15.459 ************************************ 00:09:15.718 19:27:42 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:15.718 19:27:42 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:15.718 19:27:42 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.718 19:27:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:15.718 ************************************ 00:09:15.718 START TEST bdev_json_nonarray 00:09:15.718 ************************************ 00:09:15.718 19:27:42 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:15.718 [2024-12-05 19:27:42.837001] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:09:15.718 [2024-12-05 19:27:42.837161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62549 ] 00:09:15.980 [2024-12-05 19:27:43.004291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.980 [2024-12-05 19:27:43.152390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.980 [2024-12-05 19:27:43.152527] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:15.980 [2024-12-05 19:27:43.152549] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:15.980 [2024-12-05 19:27:43.152560] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:16.241 00:09:16.241 real 0m0.613s 00:09:16.241 user 0m0.366s 00:09:16.241 sys 0m0.139s 00:09:16.241 19:27:43 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.241 19:27:43 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:16.241 ************************************ 00:09:16.241 END TEST bdev_json_nonarray 00:09:16.241 ************************************ 00:09:16.241 19:27:43 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:09:16.241 19:27:43 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:09:16.241 19:27:43 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:16.241 19:27:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:16.241 19:27:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.241 19:27:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:16.241 ************************************ 00:09:16.241 START TEST bdev_gpt_uuid 00:09:16.241 ************************************ 00:09:16.241 19:27:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:09:16.241 19:27:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:09:16.241 19:27:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:09:16.241 19:27:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62580 00:09:16.241 19:27:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:16.241 19:27:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62580 00:09:16.241 19:27:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62580 ']' 00:09:16.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.241 19:27:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.241 19:27:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:16.241 19:27:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.241 19:27:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.241 19:27:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.241 19:27:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:16.505 [2024-12-05 19:27:43.538840] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:09:16.505 [2024-12-05 19:27:43.539008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62580 ] 00:09:16.505 [2024-12-05 19:27:43.702064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.767 [2024-12-05 19:27:43.839873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.365 19:27:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.365 19:27:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:09:17.365 19:27:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:17.365 19:27:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.365 19:27:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:17.961 Some configs were skipped because the RPC state that can call them passed over. 00:09:17.961 19:27:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.961 19:27:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:09:17.961 19:27:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.961 19:27:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:17.961 19:27:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.961 19:27:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:17.961 19:27:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.961 19:27:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:17.961 19:27:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.961 19:27:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:09:17.961 { 00:09:17.961 "name": "Nvme1n1p1", 00:09:17.961 "aliases": [ 00:09:17.961 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:17.961 ], 00:09:17.961 "product_name": "GPT Disk", 00:09:17.961 "block_size": 4096, 00:09:17.961 "num_blocks": 655104, 00:09:17.961 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:17.961 "assigned_rate_limits": { 00:09:17.961 "rw_ios_per_sec": 0, 00:09:17.961 "rw_mbytes_per_sec": 0, 00:09:17.961 "r_mbytes_per_sec": 0, 00:09:17.961 "w_mbytes_per_sec": 0 00:09:17.961 }, 00:09:17.961 "claimed": false, 00:09:17.961 "zoned": false, 00:09:17.961 "supported_io_types": { 00:09:17.961 "read": true, 00:09:17.961 "write": true, 00:09:17.961 "unmap": true, 00:09:17.961 "flush": true, 00:09:17.961 "reset": true, 00:09:17.961 "nvme_admin": false, 00:09:17.961 "nvme_io": false, 00:09:17.961 "nvme_io_md": false, 00:09:17.961 "write_zeroes": true, 00:09:17.961 "zcopy": false, 00:09:17.961 "get_zone_info": false, 00:09:17.961 "zone_management": false, 00:09:17.961 "zone_append": false, 00:09:17.961 "compare": true, 00:09:17.961 "compare_and_write": false, 00:09:17.961 "abort": true, 00:09:17.961 "seek_hole": false, 00:09:17.961 "seek_data": false, 00:09:17.961 "copy": true, 00:09:17.961 "nvme_iov_md": false 00:09:17.961 }, 00:09:17.961 "driver_specific": { 
00:09:17.961 "gpt": { 00:09:17.961 "base_bdev": "Nvme1n1", 00:09:17.961 "offset_blocks": 256, 00:09:17.961 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:17.961 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:17.961 "partition_name": "SPDK_TEST_first" 00:09:17.961 } 00:09:17.961 } 00:09:17.961 } 00:09:17.961 ]' 00:09:17.961 19:27:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:09:17.961 19:27:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:09:17.961 19:27:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:09:17.961 19:27:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:17.961 19:27:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:17.961 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:17.961 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:17.961 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.961 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:17.961 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.961 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:09:17.961 { 00:09:17.961 "name": "Nvme1n1p2", 00:09:17.961 "aliases": [ 00:09:17.961 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:17.961 ], 00:09:17.961 "product_name": "GPT Disk", 00:09:17.961 "block_size": 4096, 00:09:17.961 "num_blocks": 655103, 00:09:17.961 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:17.961 "assigned_rate_limits": { 00:09:17.961 "rw_ios_per_sec": 0, 00:09:17.961 "rw_mbytes_per_sec": 0, 00:09:17.961 "r_mbytes_per_sec": 0, 00:09:17.961 "w_mbytes_per_sec": 0 00:09:17.961 }, 00:09:17.961 "claimed": false, 00:09:17.961 "zoned": false, 00:09:17.961 "supported_io_types": { 00:09:17.961 "read": true, 00:09:17.961 "write": true, 00:09:17.961 "unmap": true, 00:09:17.961 "flush": true, 00:09:17.961 "reset": true, 00:09:17.961 "nvme_admin": false, 00:09:17.961 "nvme_io": false, 00:09:17.961 "nvme_io_md": false, 00:09:17.961 "write_zeroes": true, 00:09:17.961 "zcopy": false, 00:09:17.961 "get_zone_info": false, 00:09:17.961 "zone_management": false, 00:09:17.961 "zone_append": false, 00:09:17.961 "compare": true, 00:09:17.961 "compare_and_write": false, 00:09:17.961 "abort": true, 00:09:17.961 "seek_hole": false, 00:09:17.961 "seek_data": false, 00:09:17.961 "copy": true, 00:09:17.961 "nvme_iov_md": false 00:09:17.961 }, 00:09:17.961 "driver_specific": { 00:09:17.961 "gpt": { 00:09:17.961 "base_bdev": "Nvme1n1", 00:09:17.961 "offset_blocks": 655360, 00:09:17.961 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:17.961 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:17.961 "partition_name": "SPDK_TEST_second" 00:09:17.961 } 00:09:17.961 } 00:09:17.961 } 00:09:17.961 ]' 00:09:17.961 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:09:17.961 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:09:17.961 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:09:17.961 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:17.961 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:17.961 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:17.962 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62580 00:09:17.962 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62580 ']' 00:09:17.962 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62580 00:09:17.962 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:09:17.962 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.962 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62580 00:09:17.962 killing process with pid 62580 00:09:17.962 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.962 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.962 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62580' 00:09:17.962 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62580 00:09:17.962 19:27:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62580 00:09:19.875 00:09:19.875 real 0m3.438s 00:09:19.875 user 0m3.501s 00:09:19.875 sys 0m0.498s 00:09:19.875 19:27:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.875 ************************************ 00:09:19.875 END TEST bdev_gpt_uuid 00:09:19.875 ************************************ 00:09:19.875 19:27:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:19.875 19:27:46 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:09:19.875 19:27:46 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:09:19.875 19:27:46 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:09:19.875 19:27:46 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:19.875 19:27:46 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:19.875 19:27:46 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:19.875 19:27:46 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:19.875 19:27:46 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:19.875 19:27:46 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:20.137 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:20.398 Waiting for block devices as requested 00:09:20.398 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:20.398 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:09:20.659 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:20.659 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:25.954 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:25.954 19:27:52 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:09:25.954 19:27:52 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:09:25.954 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:25.954 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:09:25.954 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:25.954 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:25.954 19:27:53 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:25.954 00:09:25.954 real 1m1.565s 00:09:25.954 user 1m17.181s 00:09:25.954 sys 0m9.735s 00:09:25.954 19:27:53 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.954 ************************************ 00:09:25.954 END TEST blockdev_nvme_gpt 00:09:25.954 ************************************ 00:09:25.954 19:27:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:25.954 19:27:53 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:25.954 19:27:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.954 19:27:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.954 19:27:53 -- common/autotest_common.sh@10 -- # set +x 00:09:25.954 ************************************ 00:09:25.954 START TEST nvme 00:09:25.954 ************************************ 00:09:25.954 19:27:53 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:26.214 * Looking for test storage... 00:09:26.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:26.214 19:27:53 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:26.214 19:27:53 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:26.214 19:27:53 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:09:26.214 19:27:53 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:26.214 19:27:53 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.214 19:27:53 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.214 19:27:53 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.214 19:27:53 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.214 19:27:53 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.214 19:27:53 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.215 19:27:53 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.215 19:27:53 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.215 19:27:53 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.215 19:27:53 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.215 19:27:53 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.215 19:27:53 nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:26.215 19:27:53 nvme -- scripts/common.sh@345 -- # : 1 00:09:26.215 19:27:53 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.215 19:27:53 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:26.215 19:27:53 nvme -- scripts/common.sh@365 -- # decimal 1 00:09:26.215 19:27:53 nvme -- scripts/common.sh@353 -- # local d=1 00:09:26.215 19:27:53 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.215 19:27:53 nvme -- scripts/common.sh@355 -- # echo 1 00:09:26.215 19:27:53 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.215 19:27:53 nvme -- scripts/common.sh@366 -- # decimal 2 00:09:26.215 19:27:53 nvme -- scripts/common.sh@353 -- # local d=2 00:09:26.215 19:27:53 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.215 19:27:53 nvme -- scripts/common.sh@355 -- # echo 2 00:09:26.215 19:27:53 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.215 19:27:53 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.215 19:27:53 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.215 19:27:53 nvme -- scripts/common.sh@368 -- # return 0 00:09:26.215 19:27:53 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.215 19:27:53 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:26.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.215 --rc genhtml_branch_coverage=1 00:09:26.215 --rc genhtml_function_coverage=1 00:09:26.215 --rc genhtml_legend=1 00:09:26.215 --rc geninfo_all_blocks=1 00:09:26.215 --rc geninfo_unexecuted_blocks=1 00:09:26.215 00:09:26.215 ' 00:09:26.215 19:27:53 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:26.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.215 --rc genhtml_branch_coverage=1 00:09:26.215 --rc genhtml_function_coverage=1 00:09:26.215 --rc genhtml_legend=1 00:09:26.215 --rc geninfo_all_blocks=1 00:09:26.215 --rc geninfo_unexecuted_blocks=1 00:09:26.215 00:09:26.215 ' 00:09:26.215 19:27:53 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:26.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.215 --rc genhtml_branch_coverage=1 00:09:26.215 --rc genhtml_function_coverage=1 00:09:26.215 --rc genhtml_legend=1 00:09:26.215 --rc geninfo_all_blocks=1 00:09:26.215 --rc geninfo_unexecuted_blocks=1 00:09:26.215 00:09:26.215 ' 00:09:26.215 19:27:53 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:26.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.215 --rc genhtml_branch_coverage=1 00:09:26.215 --rc genhtml_function_coverage=1 00:09:26.215 --rc genhtml_legend=1 00:09:26.215 --rc geninfo_all_blocks=1 00:09:26.215 --rc geninfo_unexecuted_blocks=1 00:09:26.215 00:09:26.215 ' 00:09:26.215 19:27:53 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:26.787 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:27.357 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:27.357 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:27.357 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:27.357 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:27.357 19:27:54 nvme -- nvme/nvme.sh@79 -- # uname 00:09:27.357 19:27:54 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:27.357 19:27:54 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:27.357 19:27:54 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:27.357 19:27:54 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:27.357 19:27:54 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:09:27.357 19:27:54 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:09:27.357 19:27:54 nvme -- common/autotest_common.sh@1075 -- # stubpid=63213 00:09:27.357 Waiting for stub to ready for secondary processes... 00:09:27.357 19:27:54 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:09:27.357 19:27:54 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:27.357 19:27:54 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:27.357 19:27:54 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/63213 ]] 00:09:27.357 19:27:54 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:09:27.357 [2024-12-05 19:27:54.514109] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:09:27.357 [2024-12-05 19:27:54.514226] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:28.359 [2024-12-05 19:27:55.311735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:28.359 [2024-12-05 19:27:55.407442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.359 [2024-12-05 19:27:55.407779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:28.359 [2024-12-05 19:27:55.407930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.359 [2024-12-05 19:27:55.422549] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:28.359 [2024-12-05 19:27:55.422593] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:28.359 [2024-12-05 19:27:55.438215] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:28.359 [2024-12-05 19:27:55.438314] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:28.359 [2024-12-05 19:27:55.440253] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:28.359 [2024-12-05 19:27:55.440848] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:28.359 [2024-12-05 19:27:55.440893] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:28.359 [2024-12-05 19:27:55.442534] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:28.359 [2024-12-05 19:27:55.443325] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:28.359 [2024-12-05 19:27:55.443375] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:28.359 [2024-12-05 19:27:55.445454] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:28.359 [2024-12-05 19:27:55.445638] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:28.359 [2024-12-05 19:27:55.445686] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:28.359 [2024-12-05 19:27:55.445716] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:28.359 [2024-12-05 19:27:55.445743] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:28.359 done. 00:09:28.359 19:27:55 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:28.359 19:27:55 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:09:28.359 19:27:55 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:28.359 19:27:55 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:09:28.359 19:27:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.359 19:27:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:28.359 ************************************ 00:09:28.359 START TEST nvme_reset 00:09:28.359 ************************************ 00:09:28.359 19:27:55 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:28.619 Initializing NVMe Controllers 00:09:28.619 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:28.619 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:28.619 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:28.619 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:28.619 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:28.619 00:09:28.619 real 0m0.225s 00:09:28.619 user 0m0.070s 00:09:28.619 sys 0m0.108s 00:09:28.619 19:27:55 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.619 ************************************ 00:09:28.619 END TEST nvme_reset 00:09:28.619 ************************************ 00:09:28.619 19:27:55 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:28.619 19:27:55 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:28.619 19:27:55 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.619 19:27:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.619 19:27:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:28.619 ************************************ 00:09:28.619 START TEST nvme_identify 00:09:28.619 ************************************ 00:09:28.619 19:27:55 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:09:28.619 19:27:55 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:28.619 19:27:55 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:28.619 19:27:55 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:28.619 19:27:55 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:28.619 19:27:55 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:28.619 19:27:55 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:09:28.619 19:27:55 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:28.619 19:27:55 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:28.619 19:27:55 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:28.619 19:27:55 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:28.619 19:27:55 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:28.619 19:27:55 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:28.883 [2024-12-05 
19:27:56.028476] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 63235 terminated unexpected 00:09:28.883 ===================================================== 00:09:28.883 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:28.883 ===================================================== 00:09:28.883 Controller Capabilities/Features 00:09:28.883 ================================ 00:09:28.883 Vendor ID: 1b36 00:09:28.883 Subsystem Vendor ID: 1af4 00:09:28.883 Serial Number: 12343 00:09:28.883 Model Number: QEMU NVMe Ctrl 00:09:28.883 Firmware Version: 8.0.0 00:09:28.883 Recommended Arb Burst: 6 00:09:28.883 IEEE OUI Identifier: 00 54 52 00:09:28.883 Multi-path I/O 00:09:28.883 May have multiple subsystem ports: No 00:09:28.883 May have multiple controllers: Yes 00:09:28.883 Associated with SR-IOV VF: No 00:09:28.883 Max Data Transfer Size: 524288 00:09:28.883 Max Number of Namespaces: 256 00:09:28.883 Max Number of I/O Queues: 64 00:09:28.883 NVMe Specification Version (VS): 1.4 00:09:28.883 NVMe Specification Version (Identify): 1.4 00:09:28.883 Maximum Queue Entries: 2048 00:09:28.883 Contiguous Queues Required: Yes 00:09:28.883 Arbitration Mechanisms Supported 00:09:28.883 Weighted Round Robin: Not Supported 00:09:28.883 Vendor Specific: Not Supported 00:09:28.883 Reset Timeout: 7500 ms 00:09:28.883 Doorbell Stride: 4 bytes 00:09:28.883 NVM Subsystem Reset: Not Supported 00:09:28.883 Command Sets Supported 00:09:28.883 NVM Command Set: Supported 00:09:28.883 Boot Partition: Not Supported 00:09:28.883 Memory Page Size Minimum: 4096 bytes 00:09:28.883 Memory Page Size Maximum: 65536 bytes 00:09:28.883 Persistent Memory Region: Not Supported 00:09:28.883 Optional Asynchronous Events Supported 00:09:28.883 Namespace Attribute Notices: Supported 00:09:28.883 Firmware Activation Notices: Not Supported 00:09:28.883 ANA Change Notices: Not Supported 00:09:28.883 PLE Aggregate Log Change Notices: Not Supported 00:09:28.883 LBA Status Info Alert Notices: Not Supported 00:09:28.883 EGE Aggregate Log Change Notices: Not Supported 00:09:28.883 Normal NVM Subsystem Shutdown event: Not Supported 00:09:28.883 Zone Descriptor Change Notices: Not Supported 00:09:28.883 Discovery Log Change Notices: Not Supported 00:09:28.883 Controller Attributes 00:09:28.883 128-bit Host Identifier: Not Supported 00:09:28.883 Non-Operational Permissive Mode: Not Supported 00:09:28.883 NVM Sets: Not Supported 00:09:28.883 Read Recovery Levels: Not Supported 00:09:28.883 Endurance Groups: Supported 00:09:28.883 Predictable Latency Mode: Not Supported 00:09:28.883 Traffic Based Keep ALive: Not Supported 00:09:28.883 Namespace Granularity: Not Supported 00:09:28.883 SQ Associations: Not Supported 00:09:28.883 UUID List: Not Supported 00:09:28.883 Multi-Domain Subsystem: Not Supported 00:09:28.883 Fixed Capacity Management: Not Supported 00:09:28.883 Variable Capacity Management: Not Supported 00:09:28.883 Delete Endurance Group: Not Supported 00:09:28.883 Delete NVM Set: Not Supported 00:09:28.883 Extended LBA Formats Supported: Supported 00:09:28.883 Flexible Data Placement Supported: Supported 00:09:28.883 00:09:28.883 Controller Memory Buffer Support 00:09:28.883 ================================ 00:09:28.883 Supported: No 00:09:28.883 00:09:28.883 Persistent Memory Region Support 00:09:28.883 ================================ 00:09:28.883 Supported: No 00:09:28.883 00:09:28.883 Admin Command Set Attributes 00:09:28.883 ============================ 00:09:28.883 Security Send/Receive: Not 
Supported 00:09:28.883 Format NVM: Supported 00:09:28.883 Firmware Activate/Download: Not Supported 00:09:28.883 Namespace Management: Supported 00:09:28.883 Device Self-Test: Not Supported 00:09:28.883 Directives: Supported 00:09:28.883 NVMe-MI: Not Supported 00:09:28.883 Virtualization Management: Not Supported 00:09:28.883 Doorbell Buffer Config: Supported 00:09:28.883 Get LBA Status Capability: Not Supported 00:09:28.883 Command & Feature Lockdown Capability: Not Supported 00:09:28.883 Abort Command Limit: 4 00:09:28.883 Async Event Request Limit: 4 00:09:28.883 Number of Firmware Slots: N/A 00:09:28.883 Firmware Slot 1 Read-Only: N/A 00:09:28.883 Firmware Activation Without Reset: N/A 00:09:28.883 Multiple Update Detection Support: N/A 00:09:28.883 Firmware Update Granularity: No Information Provided 00:09:28.883 Per-Namespace SMART Log: Yes 00:09:28.883 Asymmetric Namespace Access Log Page: Not Supported 00:09:28.883 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:28.883 Command Effects Log Page: Supported 00:09:28.883 Get Log Page Extended Data: Supported 00:09:28.883 Telemetry Log Pages: Not Supported 00:09:28.883 Persistent Event Log Pages: Not Supported 00:09:28.883 Supported Log Pages Log Page: May Support 00:09:28.883 Commands Supported & Effects Log Page: Not Supported 00:09:28.883 Feature Identifiers & Effects Log Page:May Support 00:09:28.883 NVMe-MI Commands & Effects Log Page: May Support 00:09:28.883 Data Area 4 for Telemetry Log: Not Supported 00:09:28.883 Error Log Page Entries Supported: 1 00:09:28.883 Keep Alive: Not Supported 00:09:28.883 00:09:28.883 NVM Command Set Attributes 00:09:28.883 ========================== 00:09:28.883 Submission Queue Entry Size 00:09:28.883 Max: 64 00:09:28.883 Min: 64 00:09:28.883 Completion Queue Entry Size 00:09:28.883 Max: 16 00:09:28.883 Min: 16 00:09:28.883 Number of Namespaces: 256 00:09:28.883 Compare Command: Supported 00:09:28.883 Write Uncorrectable Command: Not Supported 00:09:28.884 Dataset Management Command: Supported 00:09:28.884 Write Zeroes Command: Supported 00:09:28.884 Set Features Save Field: Supported 00:09:28.884 Reservations: Not Supported 00:09:28.884 Timestamp: Supported 00:09:28.884 Copy: Supported 00:09:28.884 Volatile Write Cache: Present 00:09:28.884 Atomic Write Unit (Normal): 1 00:09:28.884 Atomic Write Unit (PFail): 1 00:09:28.884 Atomic Compare & Write Unit: 1 00:09:28.884 Fused Compare & Write: Not Supported 00:09:28.884 Scatter-Gather List 00:09:28.884 SGL Command Set: Supported 00:09:28.884 SGL Keyed: Not Supported 00:09:28.884 SGL Bit Bucket Descriptor: Not Supported 00:09:28.884 SGL Metadata Pointer: Not Supported 00:09:28.884 Oversized SGL: Not Supported 00:09:28.884 SGL Metadata Address: Not Supported 00:09:28.884 SGL Offset: Not Supported 00:09:28.884 Transport SGL Data Block: Not Supported 00:09:28.884 Replay Protected Memory Block: Not Supported 00:09:28.884 00:09:28.884 Firmware Slot Information 00:09:28.884 ========================= 00:09:28.884 Active slot: 1 00:09:28.884 Slot 1 Firmware Revision: 1.0 00:09:28.884 00:09:28.884 00:09:28.884 Commands Supported and Effects 00:09:28.884 ============================== 00:09:28.884 Admin Commands 00:09:28.884 -------------- 00:09:28.884 Delete I/O Submission Queue (00h): Supported 00:09:28.884 Create I/O Submission Queue (01h): Supported 00:09:28.884 Get Log Page (02h): Supported 00:09:28.884 Delete I/O Completion Queue (04h): Supported 00:09:28.884 Create I/O Completion Queue (05h): Supported 00:09:28.884 Identify (06h): Supported 
00:09:28.884 Abort (08h): Supported 00:09:28.884 Set Features (09h): Supported 00:09:28.884 Get Features (0Ah): Supported 00:09:28.884 Asynchronous Event Request (0Ch): Supported 00:09:28.884 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:28.884 Directive Send (19h): Supported 00:09:28.884 Directive Receive (1Ah): Supported 00:09:28.884 Virtualization Management (1Ch): Supported 00:09:28.884 Doorbell Buffer Config (7Ch): Supported 00:09:28.884 Format NVM (80h): Supported LBA-Change 00:09:28.884 I/O Commands 00:09:28.884 ------------ 00:09:28.884 Flush (00h): Supported LBA-Change 00:09:28.884 Write (01h): Supported LBA-Change 00:09:28.884 Read (02h): Supported 00:09:28.884 Compare (05h): Supported 00:09:28.884 Write Zeroes (08h): Supported LBA-Change 00:09:28.884 Dataset Management (09h): Supported LBA-Change 00:09:28.884 Unknown (0Ch): Supported 00:09:28.884 Unknown (12h): Supported 00:09:28.884 Copy (19h): Supported LBA-Change 00:09:28.884 Unknown (1Dh): Supported LBA-Change 00:09:28.884 00:09:28.884 Error Log 00:09:28.884 ========= 00:09:28.884 00:09:28.884 Arbitration 00:09:28.884 =========== 00:09:28.884 Arbitration Burst: no limit 00:09:28.884 00:09:28.884 Power Management 00:09:28.884 ================ 00:09:28.884 Number of Power States: 1 00:09:28.884 Current Power State: Power State #0 00:09:28.884 Power State #0: 00:09:28.884 Max Power: 25.00 W 00:09:28.884 Non-Operational State: Operational 00:09:28.884 Entry Latency: 16 microseconds 00:09:28.884 Exit Latency: 4 microseconds 00:09:28.884 Relative Read Throughput: 0 00:09:28.884 Relative Read Latency: 0 00:09:28.884 Relative Write Throughput: 0 00:09:28.884 Relative Write Latency: 0 00:09:28.884 Idle Power: Not Reported 00:09:28.884 Active Power: Not Reported 00:09:28.884 Non-Operational Permissive Mode: Not Supported 00:09:28.884 00:09:28.884 Health Information 00:09:28.884 ================== 00:09:28.884 Critical Warnings: 00:09:28.884 Available Spare Space: OK 00:09:28.884 Temperature: OK 00:09:28.884 Device Reliability: OK 00:09:28.884 Read Only: No 00:09:28.884 Volatile Memory Backup: OK 00:09:28.884 Current Temperature: 323 Kelvin (50 Celsius) 00:09:28.884 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:28.884 Available Spare: 0% 00:09:28.884 Available Spare Threshold: 0% 00:09:28.884 Life Percentage Used: 0% 00:09:28.884 Data Units Read: 749 00:09:28.884 Data Units Written: 678 00:09:28.884 Host Read Commands: 31911 00:09:28.884 Host Write Commands: 31334 00:09:28.884 Controller Busy Time: 0 minutes 00:09:28.884 Power Cycles: 0 00:09:28.884 Power On Hours: 0 hours 00:09:28.884 Unsafe Shutdowns: 0 00:09:28.884 Unrecoverable Media Errors: 0 00:09:28.884 Lifetime Error Log Entries: 0 00:09:28.884 Warning Temperature Time: 0 minutes 00:09:28.884 Critical Temperature Time: 0 minutes 00:09:28.884 00:09:28.884 Number of Queues 00:09:28.884 ================ 00:09:28.884 Number of I/O Submission Queues: 64 00:09:28.884 Number of I/O Completion Queues: 64 00:09:28.884 00:09:28.884 ZNS Specific Controller Data 00:09:28.884 ============================ 00:09:28.884 Zone Append Size Limit: 0 00:09:28.884 00:09:28.884 00:09:28.884 Active Namespaces 00:09:28.884 ================= 00:09:28.884 Namespace ID:1 00:09:28.884 Error Recovery Timeout: Unlimited 00:09:28.884 Command Set Identifier: NVM (00h) 00:09:28.884 Deallocate: Supported 00:09:28.884 Deallocated/Unwritten Error: Supported 00:09:28.884 Deallocated Read Value: All 0x00 00:09:28.884 Deallocate in Write Zeroes: Not Supported 00:09:28.884 Deallocated Guard 
Field: 0xFFFF 00:09:28.884 Flush: Supported 00:09:28.884 Reservation: Not Supported 00:09:28.884 Namespace Sharing Capabilities: Multiple Controllers 00:09:28.884 Size (in LBAs): 262144 (1GiB) 00:09:28.884 Capacity (in LBAs): 262144 (1GiB) 00:09:28.884 Utilization (in LBAs): 262144 (1GiB) 00:09:28.884 Thin Provisioning: Not Supported 00:09:28.884 Per-NS Atomic Units: No 00:09:28.884 Maximum Single Source Range Length: 128 00:09:28.884 Maximum Copy Length: 128 00:09:28.884 Maximum Source Range Count: 128 00:09:28.884 NGUID/EUI64 Never Reused: No 00:09:28.884 Namespace Write Protected: No 00:09:28.884 Endurance group ID: 1 00:09:28.884 Number of LBA Formats: 8 00:09:28.884 Current LBA Format: LBA Format #04 00:09:28.884 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:28.884 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:28.884 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:28.884 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:28.884 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:28.884 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:28.884 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:28.884 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:28.884 00:09:28.884 Get Feature FDP: 00:09:28.884 ================ 00:09:28.884 Enabled: Yes 00:09:28.884 FDP configuration index: 0 00:09:28.884 00:09:28.884 FDP configurations log page 00:09:28.884 =========================== 00:09:28.884 Number of FDP configurations: 1 00:09:28.884 Version: 0 00:09:28.884 Size: 112 00:09:28.884 FDP Configuration Descriptor: 0 00:09:28.884 Descriptor Size: 96 00:09:28.884 Reclaim Group Identifier format: 2 00:09:28.884 FDP Volatile Write Cache: Not Present 00:09:28.884 FDP Configuration: Valid 00:09:28.884 Vendor Specific Size: 0 00:09:28.884 Number of Reclaim Groups: 2 00:09:28.884 Number of Reclaim Unit Handles: 8 00:09:28.884 Max Placement Identifiers: 128 00:09:28.884 Number of Namespaces Supported: 256 00:09:28.884 Reclaim unit Nominal Size: 6000000 bytes 00:09:28.884 Estimated Reclaim Unit Time Limit: Not Reported 00:09:28.884 RUH Desc #000: RUH Type: Initially Isolated 00:09:28.884 RUH Desc #001: RUH Type: Initially Isolated 00:09:28.884 RUH Desc #002: RUH Type: Initially Isolated 00:09:28.884 RUH Desc #003: RUH Type: Initially Isolated 00:09:28.884 RUH Desc #004: RUH Type: Initially Isolated 00:09:28.884 RUH Desc #005: RUH Type: Initially Isolated 00:09:28.884 RUH Desc #006: RUH Type: Initially Isolated 00:09:28.884 RUH Desc #007: RUH Type: Initially Isolated 00:09:28.884 00:09:28.884 FDP reclaim unit handle usage log page 00:09:28.884 ====================================== [2024-12-05 19:27:56.031698] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 63235 terminated unexpected 00:09:28.884 Number of Reclaim Unit Handles: 8 00:09:28.884 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:28.884 RUH Usage Desc #001: RUH Attributes: Unused 00:09:28.884 RUH Usage Desc #002: RUH Attributes: Unused 00:09:28.884 RUH Usage Desc #003: RUH Attributes: Unused 00:09:28.884 RUH Usage Desc #004: RUH Attributes: Unused 00:09:28.884 RUH Usage Desc #005: RUH Attributes: Unused 00:09:28.884 RUH Usage Desc #006: RUH Attributes: Unused 00:09:28.884 RUH Usage Desc #007: RUH Attributes: Unused 00:09:28.884 00:09:28.884 FDP statistics log page 00:09:28.884 ======================= 00:09:28.884 Host bytes with metadata written: 416522240 00:09:28.884 Media bytes with metadata written: 416567296 00:09:28.884 Media
bytes erased: 0 00:09:28.884 00:09:28.884 FDP events log page 00:09:28.884 =================== 00:09:28.884 Number of FDP events: 0 00:09:28.884 00:09:28.884 NVM Specific Namespace Data 00:09:28.884 =========================== 00:09:28.884 Logical Block Storage Tag Mask: 0 00:09:28.885 Protection Information Capabilities: 00:09:28.885 16b Guard Protection Information Storage Tag Support: No 00:09:28.885 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:28.885 Storage Tag Check Read Support: No 00:09:28.885 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.885 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.885 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.885 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.885 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.885 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.885 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.885 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.885 ===================================================== 00:09:28.885 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:28.885 ===================================================== 00:09:28.885 Controller Capabilities/Features 00:09:28.885 ================================ 00:09:28.885 Vendor ID: 1b36 00:09:28.885 Subsystem Vendor ID: 1af4 00:09:28.885 Serial Number: 12340 00:09:28.885 Model Number: QEMU NVMe Ctrl 00:09:28.885 Firmware Version: 8.0.0 00:09:28.885 Recommended Arb Burst: 6 00:09:28.885 IEEE OUI Identifier: 00 54 52 00:09:28.885 Multi-path I/O 00:09:28.885 May have multiple subsystem ports: No 00:09:28.885 May have multiple controllers: No 00:09:28.885 Associated with SR-IOV VF: No 00:09:28.885 Max Data Transfer Size: 524288 00:09:28.885 Max Number of Namespaces: 256 00:09:28.885 Max Number of I/O Queues: 64 00:09:28.885 NVMe Specification Version (VS): 1.4 00:09:28.885 NVMe Specification Version (Identify): 1.4 00:09:28.885 Maximum Queue Entries: 2048 00:09:28.885 Contiguous Queues Required: Yes 00:09:28.885 Arbitration Mechanisms Supported 00:09:28.885 Weighted Round Robin: Not Supported 00:09:28.885 Vendor Specific: Not Supported 00:09:28.885 Reset Timeout: 7500 ms 00:09:28.885 Doorbell Stride: 4 bytes 00:09:28.885 NVM Subsystem Reset: Not Supported 00:09:28.885 Command Sets Supported 00:09:28.885 NVM Command Set: Supported 00:09:28.885 Boot Partition: Not Supported 00:09:28.885 Memory Page Size Minimum: 4096 bytes 00:09:28.885 Memory Page Size Maximum: 65536 bytes 00:09:28.885 Persistent Memory Region: Not Supported 00:09:28.885 Optional Asynchronous Events Supported 00:09:28.885 Namespace Attribute Notices: Supported 00:09:28.885 Firmware Activation Notices: Not Supported 00:09:28.885 ANA Change Notices: Not Supported 00:09:28.885 PLE Aggregate Log Change Notices: Not Supported 00:09:28.885 LBA Status Info Alert Notices: Not Supported 00:09:28.885 EGE Aggregate Log Change Notices: Not Supported 00:09:28.885 Normal NVM Subsystem Shutdown event: Not Supported 00:09:28.885 Zone Descriptor Change Notices: Not Supported 00:09:28.885 Discovery Log Change Notices: Not Supported 00:09:28.885 Controller Attributes 00:09:28.885 
128-bit Host Identifier: Not Supported 00:09:28.885 Non-Operational Permissive Mode: Not Supported 00:09:28.885 NVM Sets: Not Supported 00:09:28.885 Read Recovery Levels: Not Supported 00:09:28.885 Endurance Groups: Not Supported 00:09:28.885 Predictable Latency Mode: Not Supported 00:09:28.885 Traffic Based Keep ALive: Not Supported 00:09:28.885 Namespace Granularity: Not Supported 00:09:28.885 SQ Associations: Not Supported 00:09:28.885 UUID List: Not Supported 00:09:28.885 Multi-Domain Subsystem: Not Supported 00:09:28.885 Fixed Capacity Management: Not Supported 00:09:28.885 Variable Capacity Management: Not Supported 00:09:28.885 Delete Endurance Group: Not Supported 00:09:28.885 Delete NVM Set: Not Supported 00:09:28.885 Extended LBA Formats Supported: Supported 00:09:28.885 Flexible Data Placement Supported: Not Supported 00:09:28.885 00:09:28.885 Controller Memory Buffer Support 00:09:28.885 ================================ 00:09:28.885 Supported: No 00:09:28.885 00:09:28.885 Persistent Memory Region Support 00:09:28.885 ================================ 00:09:28.885 Supported: No 00:09:28.885 00:09:28.885 Admin Command Set Attributes 00:09:28.885 ============================ 00:09:28.885 Security Send/Receive: Not Supported 00:09:28.885 Format NVM: Supported 00:09:28.885 Firmware Activate/Download: Not Supported 00:09:28.885 Namespace Management: Supported 00:09:28.885 Device Self-Test: Not Supported 00:09:28.885 Directives: Supported 00:09:28.885 NVMe-MI: Not Supported 00:09:28.885 Virtualization Management: Not Supported 00:09:28.885 Doorbell Buffer Config: Supported 00:09:28.885 Get LBA Status Capability: Not Supported 00:09:28.885 Command & Feature Lockdown Capability: Not Supported 00:09:28.885 Abort Command Limit: 4 00:09:28.885 Async Event Request Limit: 4 00:09:28.885 Number of Firmware Slots: N/A 00:09:28.885 Firmware Slot 1 Read-Only: N/A 00:09:28.885 Firmware Activation Without Reset: N/A 00:09:28.885 Multiple Update Detection Support: N/A 00:09:28.885 Firmware Update Granularity: No Information Provided 00:09:28.885 Per-Namespace SMART Log: Yes 00:09:28.885 Asymmetric Namespace Access Log Page: Not Supported 00:09:28.885 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:28.885 Command Effects Log Page: Supported 00:09:28.885 Get Log Page Extended Data: Supported 00:09:28.885 Telemetry Log Pages: Not Supported 00:09:28.885 Persistent Event Log Pages: Not Supported 00:09:28.885 Supported Log Pages Log Page: May Support 00:09:28.885 Commands Supported & Effects Log Page: Not Supported 00:09:28.885 Feature Identifiers & Effects Log Page:May Support 00:09:28.885 NVMe-MI Commands & Effects Log Page: May Support 00:09:28.885 Data Area 4 for Telemetry Log: Not Supported 00:09:28.885 Error Log Page Entries Supported: 1 00:09:28.885 Keep Alive: Not Supported 00:09:28.885 00:09:28.885 NVM Command Set Attributes 00:09:28.885 ========================== 00:09:28.885 Submission Queue Entry Size 00:09:28.885 Max: 64 00:09:28.885 Min: 64 00:09:28.885 Completion Queue Entry Size 00:09:28.885 Max: 16 00:09:28.885 Min: 16 00:09:28.885 Number of Namespaces: 256 00:09:28.885 Compare Command: Supported 00:09:28.885 Write Uncorrectable Command: Not Supported 00:09:28.885 Dataset Management Command: Supported 00:09:28.885 Write Zeroes Command: Supported 00:09:28.885 Set Features Save Field: Supported 00:09:28.885 Reservations: Not Supported 00:09:28.885 Timestamp: Supported 00:09:28.885 Copy: Supported 00:09:28.885 Volatile Write Cache: Present 00:09:28.885 Atomic Write Unit (Normal): 1 
00:09:28.885 Atomic Write Unit (PFail): 1 00:09:28.885 Atomic Compare & Write Unit: 1 00:09:28.885 Fused Compare & Write: Not Supported 00:09:28.885 Scatter-Gather List 00:09:28.885 SGL Command Set: Supported 00:09:28.885 SGL Keyed: Not Supported 00:09:28.885 SGL Bit Bucket Descriptor: Not Supported 00:09:28.885 SGL Metadata Pointer: Not Supported 00:09:28.885 Oversized SGL: Not Supported 00:09:28.885 SGL Metadata Address: Not Supported 00:09:28.885 SGL Offset: Not Supported 00:09:28.885 Transport SGL Data Block: Not Supported 00:09:28.885 Replay Protected Memory Block: Not Supported 00:09:28.885 00:09:28.885 Firmware Slot Information 00:09:28.885 ========================= 00:09:28.885 Active slot: 1 00:09:28.885 Slot 1 Firmware Revision: 1.0 00:09:28.885 00:09:28.885 00:09:28.885 Commands Supported and Effects 00:09:28.885 ============================== 00:09:28.885 Admin Commands 00:09:28.885 -------------- 00:09:28.885 Delete I/O Submission Queue (00h): Supported 00:09:28.885 Create I/O Submission Queue (01h): Supported 00:09:28.885 Get Log Page (02h): Supported 00:09:28.885 Delete I/O Completion Queue (04h): Supported 00:09:28.885 Create I/O Completion Queue (05h): Supported 00:09:28.885 Identify (06h): Supported 00:09:28.885 Abort (08h): Supported 00:09:28.885 Set Features (09h): Supported 00:09:28.885 Get Features (0Ah): Supported 00:09:28.885 Asynchronous Event Request (0Ch): Supported 00:09:28.885 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:28.885 Directive Send (19h): Supported 00:09:28.885 Directive Receive (1Ah): Supported 00:09:28.885 Virtualization Management (1Ch): Supported 00:09:28.885 Doorbell Buffer Config (7Ch): Supported 00:09:28.885 Format NVM (80h): Supported LBA-Change 00:09:28.885 I/O Commands 00:09:28.885 ------------ 00:09:28.885 Flush (00h): Supported LBA-Change 00:09:28.885 Write (01h): Supported LBA-Change 00:09:28.885 Read (02h): Supported 00:09:28.885 Compare (05h): Supported 00:09:28.885 Write Zeroes (08h): Supported LBA-Change 00:09:28.885 Dataset Management (09h): Supported LBA-Change 00:09:28.885 Unknown (0Ch): Supported 00:09:28.885 Unknown (12h): Supported 00:09:28.885 Copy (19h): Supported LBA-Change 00:09:28.885 Unknown (1Dh): Supported LBA-Change 00:09:28.885 00:09:28.885 Error Log 00:09:28.885 ========= 00:09:28.885 00:09:28.885 Arbitration 00:09:28.885 =========== 00:09:28.886 Arbitration Burst: no limit 00:09:28.886 00:09:28.886 Power Management 00:09:28.886 ================ 00:09:28.886 Number of Power States: 1 00:09:28.886 Current Power State: Power State #0 00:09:28.886 Power State #0: 00:09:28.886 Max Power: 25.00 W 00:09:28.886 Non-Operational State: Operational 00:09:28.886 Entry Latency: 16 microseconds 00:09:28.886 Exit Latency: 4 microseconds 00:09:28.886 Relative Read Throughput: 0 00:09:28.886 Relative Read Latency: 0 00:09:28.886 Relative Write Throughput: 0 00:09:28.886 Relative Write Latency: 0 00:09:28.886 Idle Power: Not Reported 00:09:28.886 Active Power: Not Reported 00:09:28.886 Non-Operational Permissive Mode: Not Supported 00:09:28.886 00:09:28.886 Health Information 00:09:28.886 ================== 00:09:28.886 Critical Warnings: 00:09:28.886 Available Spare Space: OK 00:09:28.886 Temperature: OK 00:09:28.886 Device Reliability: OK 00:09:28.886 Read Only: No 00:09:28.886 Volatile Memory Backup: OK 00:09:28.886 Current Temperature: 323 Kelvin (50 Celsius) 00:09:28.886 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:28.886 Available Spare: 0% 00:09:28.886 Available Spare Threshold: 0% 00:09:28.886 Life 
Percentage Used: 0% 00:09:28.886 Data Units Read: 683 00:09:28.886 Data Units Written: 611 00:09:28.886 Host Read Commands: 31155 00:09:28.886 Host Write Commands: 30941 00:09:28.886 Controller Busy Time: 0 minutes 00:09:28.886 Power Cycles: 0 00:09:28.886 Power On Hours: 0 hours 00:09:28.886 Unsafe Shutdowns: 0 00:09:28.886 Unrecoverable Media Errors: 0 00:09:28.886 Lifetime Error Log Entries: 0 00:09:28.886 Warning Temperature Time: 0 minutes 00:09:28.886 Critical Temperature Time: 0 minutes 00:09:28.886 00:09:28.886 Number of Queues 00:09:28.886 ================ 00:09:28.886 Number of I/O Submission Queues: 64 00:09:28.886 Number of I/O Completion Queues: 64 00:09:28.886 00:09:28.886 ZNS Specific Controller Data 00:09:28.886 ============================ 00:09:28.886 Zone Append Size Limit: 0 00:09:28.886 00:09:28.886 00:09:28.886 Active Namespaces 00:09:28.886 ================= 00:09:28.886 Namespace ID:1 00:09:28.886 Error Recovery Timeout: Unlimited 00:09:28.886 Command Set Identifier: NVM (00h) 00:09:28.886 Deallocate: Supported 00:09:28.886 Deallocated/Unwritten Error: Supported 00:09:28.886 Deallocated Read Value: All 0x00 00:09:28.886 Deallocate in Write Zeroes: Not Supported 00:09:28.886 Deallocated Guard Field: 0xFFFF 00:09:28.886 Flush: Supported 00:09:28.886 Reservation: Not Supported 00:09:28.886 Metadata Transferred as: Separate Metadata Buffer 00:09:28.886 Namespace Sharing Capabilities: Private 00:09:28.886 Size (in LBAs): 1548666 (5GiB) 00:09:28.886 Capacity (in LBAs): 1548666 (5GiB) 00:09:28.886 Utilization (in LBAs): 1548666 (5GiB) 00:09:28.886 Thin Provisioning: Not Supported 00:09:28.886 Per-NS Atomic Units: No 00:09:28.886 Maximum Single Source Range Length: 128 00:09:28.886 Maximum Copy Length: 128 00:09:28.886 Maximum Source Range Count: 128 00:09:28.886 NGUID/EUI64 Never Reused: No 00:09:28.886 Namespace Write Protected: No 00:09:28.886 Number of LBA Formats: 8 00:09:28.886 [2024-12-05 19:27:56.033193] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 63235 terminated unexpected 00:09:28.886 Current LBA Format: LBA Format #07 00:09:28.886 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:28.886 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:28.886 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:28.886 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:28.886 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:28.886 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:28.886 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:28.886 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:28.886 00:09:28.886 NVM Specific Namespace Data 00:09:28.886 =========================== 00:09:28.886 Logical Block Storage Tag Mask: 0 00:09:28.886 Protection Information Capabilities: 00:09:28.886 16b Guard Protection Information Storage Tag Support: No 00:09:28.886 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:28.886 Storage Tag Check Read Support: No 00:09:28.886 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.886 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.886 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.886 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.886 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.886
Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.886 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.886 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.886 ===================================================== 00:09:28.886 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:28.886 ===================================================== 00:09:28.886 Controller Capabilities/Features 00:09:28.886 ================================ 00:09:28.886 Vendor ID: 1b36 00:09:28.886 Subsystem Vendor ID: 1af4 00:09:28.886 Serial Number: 12341 00:09:28.886 Model Number: QEMU NVMe Ctrl 00:09:28.886 Firmware Version: 8.0.0 00:09:28.886 Recommended Arb Burst: 6 00:09:28.886 IEEE OUI Identifier: 00 54 52 00:09:28.886 Multi-path I/O 00:09:28.886 May have multiple subsystem ports: No 00:09:28.886 May have multiple controllers: No 00:09:28.886 Associated with SR-IOV VF: No 00:09:28.886 Max Data Transfer Size: 524288 00:09:28.886 Max Number of Namespaces: 256 00:09:28.886 Max Number of I/O Queues: 64 00:09:28.886 NVMe Specification Version (VS): 1.4 00:09:28.886 NVMe Specification Version (Identify): 1.4 00:09:28.886 Maximum Queue Entries: 2048 00:09:28.886 Contiguous Queues Required: Yes 00:09:28.886 Arbitration Mechanisms Supported 00:09:28.886 Weighted Round Robin: Not Supported 00:09:28.886 Vendor Specific: Not Supported 00:09:28.886 Reset Timeout: 7500 ms 00:09:28.886 Doorbell Stride: 4 bytes 00:09:28.886 NVM Subsystem Reset: Not Supported 00:09:28.886 Command Sets Supported 00:09:28.886 NVM Command Set: Supported 00:09:28.886 Boot Partition: Not Supported 00:09:28.886 Memory Page Size Minimum: 4096 bytes 00:09:28.886 Memory Page Size Maximum: 65536 bytes 00:09:28.886 Persistent Memory Region: Not Supported 00:09:28.886 Optional Asynchronous Events Supported 00:09:28.886 Namespace Attribute Notices: Supported 00:09:28.886 Firmware Activation Notices: Not Supported 00:09:28.886 ANA Change Notices: Not Supported 00:09:28.886 PLE Aggregate Log Change Notices: Not Supported 00:09:28.886 LBA Status Info Alert Notices: Not Supported 00:09:28.886 EGE Aggregate Log Change Notices: Not Supported 00:09:28.886 Normal NVM Subsystem Shutdown event: Not Supported 00:09:28.886 Zone Descriptor Change Notices: Not Supported 00:09:28.886 Discovery Log Change Notices: Not Supported 00:09:28.886 Controller Attributes 00:09:28.886 128-bit Host Identifier: Not Supported 00:09:28.886 Non-Operational Permissive Mode: Not Supported 00:09:28.886 NVM Sets: Not Supported 00:09:28.886 Read Recovery Levels: Not Supported 00:09:28.886 Endurance Groups: Not Supported 00:09:28.886 Predictable Latency Mode: Not Supported 00:09:28.886 Traffic Based Keep ALive: Not Supported 00:09:28.886 Namespace Granularity: Not Supported 00:09:28.886 SQ Associations: Not Supported 00:09:28.886 UUID List: Not Supported 00:09:28.886 Multi-Domain Subsystem: Not Supported 00:09:28.886 Fixed Capacity Management: Not Supported 00:09:28.886 Variable Capacity Management: Not Supported 00:09:28.886 Delete Endurance Group: Not Supported 00:09:28.886 Delete NVM Set: Not Supported 00:09:28.886 Extended LBA Formats Supported: Supported 00:09:28.886 Flexible Data Placement Supported: Not Supported 00:09:28.886 00:09:28.886 Controller Memory Buffer Support 00:09:28.886 ================================ 00:09:28.886 Supported: No 00:09:28.886 00:09:28.886 Persistent Memory Region Support 00:09:28.886 
================================ 00:09:28.886 Supported: No 00:09:28.886 00:09:28.886 Admin Command Set Attributes 00:09:28.886 ============================ 00:09:28.886 Security Send/Receive: Not Supported 00:09:28.886 Format NVM: Supported 00:09:28.886 Firmware Activate/Download: Not Supported 00:09:28.886 Namespace Management: Supported 00:09:28.886 Device Self-Test: Not Supported 00:09:28.886 Directives: Supported 00:09:28.886 NVMe-MI: Not Supported 00:09:28.886 Virtualization Management: Not Supported 00:09:28.886 Doorbell Buffer Config: Supported 00:09:28.886 Get LBA Status Capability: Not Supported 00:09:28.886 Command & Feature Lockdown Capability: Not Supported 00:09:28.886 Abort Command Limit: 4 00:09:28.886 Async Event Request Limit: 4 00:09:28.886 Number of Firmware Slots: N/A 00:09:28.886 Firmware Slot 1 Read-Only: N/A 00:09:28.887 Firmware Activation Without Reset: N/A 00:09:28.887 Multiple Update Detection Support: N/A 00:09:28.887 Firmware Update Granularity: No Information Provided 00:09:28.887 Per-Namespace SMART Log: Yes 00:09:28.887 Asymmetric Namespace Access Log Page: Not Supported 00:09:28.887 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:28.887 Command Effects Log Page: Supported 00:09:28.887 Get Log Page Extended Data: Supported 00:09:28.887 Telemetry Log Pages: Not Supported 00:09:28.887 Persistent Event Log Pages: Not Supported 00:09:28.887 Supported Log Pages Log Page: May Support 00:09:28.887 Commands Supported & Effects Log Page: Not Supported 00:09:28.887 Feature Identifiers & Effects Log Page:May Support 00:09:28.887 NVMe-MI Commands & Effects Log Page: May Support 00:09:28.887 Data Area 4 for Telemetry Log: Not Supported 00:09:28.887 Error Log Page Entries Supported: 1 00:09:28.887 Keep Alive: Not Supported 00:09:28.887 00:09:28.887 NVM Command Set Attributes 00:09:28.887 ========================== 00:09:28.887 Submission Queue Entry Size 00:09:28.887 Max: 64 00:09:28.887 Min: 64 00:09:28.887 Completion Queue Entry Size 00:09:28.887 Max: 16 00:09:28.887 Min: 16 00:09:28.887 Number of Namespaces: 256 00:09:28.887 Compare Command: Supported 00:09:28.887 Write Uncorrectable Command: Not Supported 00:09:28.887 Dataset Management Command: Supported 00:09:28.887 Write Zeroes Command: Supported 00:09:28.887 Set Features Save Field: Supported 00:09:28.887 Reservations: Not Supported 00:09:28.887 Timestamp: Supported 00:09:28.887 Copy: Supported 00:09:28.887 Volatile Write Cache: Present 00:09:28.887 Atomic Write Unit (Normal): 1 00:09:28.887 Atomic Write Unit (PFail): 1 00:09:28.887 Atomic Compare & Write Unit: 1 00:09:28.887 Fused Compare & Write: Not Supported 00:09:28.887 Scatter-Gather List 00:09:28.887 SGL Command Set: Supported 00:09:28.887 SGL Keyed: Not Supported 00:09:28.887 SGL Bit Bucket Descriptor: Not Supported 00:09:28.887 SGL Metadata Pointer: Not Supported 00:09:28.887 Oversized SGL: Not Supported 00:09:28.887 SGL Metadata Address: Not Supported 00:09:28.887 SGL Offset: Not Supported 00:09:28.887 Transport SGL Data Block: Not Supported 00:09:28.887 Replay Protected Memory Block: Not Supported 00:09:28.887 00:09:28.887 Firmware Slot Information 00:09:28.887 ========================= 00:09:28.887 Active slot: 1 00:09:28.887 Slot 1 Firmware Revision: 1.0 00:09:28.887 00:09:28.887 00:09:28.887 Commands Supported and Effects 00:09:28.887 ============================== 00:09:28.887 Admin Commands 00:09:28.887 -------------- 00:09:28.887 Delete I/O Submission Queue (00h): Supported 00:09:28.887 Create I/O Submission Queue (01h): Supported 00:09:28.887 
Get Log Page (02h): Supported 00:09:28.887 Delete I/O Completion Queue (04h): Supported 00:09:28.887 Create I/O Completion Queue (05h): Supported 00:09:28.887 Identify (06h): Supported 00:09:28.887 Abort (08h): Supported 00:09:28.887 Set Features (09h): Supported 00:09:28.887 Get Features (0Ah): Supported 00:09:28.887 Asynchronous Event Request (0Ch): Supported 00:09:28.887 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:28.887 Directive Send (19h): Supported 00:09:28.887 Directive Receive (1Ah): Supported 00:09:28.887 Virtualization Management (1Ch): Supported 00:09:28.887 Doorbell Buffer Config (7Ch): Supported 00:09:28.887 Format NVM (80h): Supported LBA-Change 00:09:28.887 I/O Commands 00:09:28.887 ------------ 00:09:28.887 Flush (00h): Supported LBA-Change 00:09:28.887 Write (01h): Supported LBA-Change 00:09:28.887 Read (02h): Supported 00:09:28.887 Compare (05h): Supported 00:09:28.887 Write Zeroes (08h): Supported LBA-Change 00:09:28.887 Dataset Management (09h): Supported LBA-Change 00:09:28.887 Unknown (0Ch): Supported 00:09:28.887 Unknown (12h): Supported 00:09:28.887 Copy (19h): Supported LBA-Change 00:09:28.887 Unknown (1Dh): Supported LBA-Change 00:09:28.887 00:09:28.887 Error Log 00:09:28.887 ========= 00:09:28.887 00:09:28.887 Arbitration 00:09:28.887 =========== 00:09:28.887 Arbitration Burst: no limit 00:09:28.887 00:09:28.887 Power Management 00:09:28.887 ================ 00:09:28.887 Number of Power States: 1 00:09:28.887 Current Power State: Power State #0 00:09:28.887 Power State #0: 00:09:28.887 Max Power: 25.00 W 00:09:28.887 Non-Operational State: Operational 00:09:28.887 Entry Latency: 16 microseconds 00:09:28.887 Exit Latency: 4 microseconds 00:09:28.887 Relative Read Throughput: 0 00:09:28.887 Relative Read Latency: 0 00:09:28.887 Relative Write Throughput: 0 00:09:28.887 Relative Write Latency: 0 00:09:28.887 Idle Power: Not Reported 00:09:28.887 Active Power: Not Reported 00:09:28.887 Non-Operational Permissive Mode: Not Supported 00:09:28.887 00:09:28.887 Health Information 00:09:28.887 ================== 00:09:28.887 Critical Warnings: 00:09:28.887 Available Spare Space: OK 00:09:28.887 Temperature: OK 00:09:28.887 Device Reliability: OK 00:09:28.887 Read Only: No 00:09:28.887 Volatile Memory Backup: OK 00:09:28.887 Current Temperature: 323 Kelvin (50 Celsius) 00:09:28.887 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:28.887 Available Spare: 0% 00:09:28.887 Available Spare Threshold: 0% 00:09:28.887 Life Percentage Used: 0% 00:09:28.887 Data Units Read: 1002 00:09:28.887 Data Units Written: 875 00:09:28.887 Host Read Commands: 46478 00:09:28.887 Host Write Commands: 45367 00:09:28.887 Controller Busy Time: 0 minutes 00:09:28.887 Power Cycles: 0 00:09:28.887 Power On Hours: 0 hours 00:09:28.887 Unsafe Shutdowns: 0 00:09:28.887 Unrecoverable Media Errors: 0 00:09:28.887 Lifetime Error Log Entries: 0 00:09:28.887 Warning Temperature Time: 0 minutes 00:09:28.887 Critical Temperature Time: 0 minutes 00:09:28.887 00:09:28.887 Number of Queues 00:09:28.887 ================ 00:09:28.887 Number of I/O Submission Queues: 64 00:09:28.887 Number of I/O Completion Queues: 64 00:09:28.887 00:09:28.887 ZNS Specific Controller Data 00:09:28.887 ============================ 00:09:28.887 Zone Append Size Limit: 0 00:09:28.887 00:09:28.887 00:09:28.887 Active Namespaces 00:09:28.887 ================= 00:09:28.887 Namespace ID:1 00:09:28.887 Error Recovery Timeout: Unlimited 00:09:28.887 Command Set Identifier: NVM (00h) 00:09:28.887 Deallocate: Supported 
00:09:28.887 Deallocated/Unwritten Error: Supported 00:09:28.887 Deallocated Read Value: All 0x00 00:09:28.887 Deallocate in Write Zeroes: Not Supported 00:09:28.887 Deallocated Guard Field: 0xFFFF 00:09:28.887 Flush: Supported 00:09:28.887 Reservation: Not Supported 00:09:28.887 Namespace Sharing Capabilities: Private 00:09:28.887 Size (in LBAs): 1310720 (5GiB) 00:09:28.887 Capacity (in LBAs): 1310720 (5GiB) 00:09:28.887 Utilization (in LBAs): 1310720 (5GiB) 00:09:28.887 Thin Provisioning: Not Supported 00:09:28.887 Per-NS Atomic Units: No 00:09:28.887 Maximum Single Source Range Length: 128 00:09:28.887 Maximum Copy Length: 128 00:09:28.887 Maximum Source Range Count: 128 00:09:28.887 NGUID/EUI64 Never Reused: No 00:09:28.887 Namespace Write Protected: No 00:09:28.887 Number of LBA Formats: 8 00:09:28.887 Current LBA Format: LBA Format #04 00:09:28.887 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:28.887 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:28.887 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:28.887 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:28.887 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:28.887 [2024-12-05 19:27:56.034578] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 63235 terminated unexpected 00:09:28.887 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:28.887 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:28.887 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:28.887 00:09:28.887 NVM Specific Namespace Data 00:09:28.887 =========================== 00:09:28.887 Logical Block Storage Tag Mask: 0 00:09:28.887 Protection Information Capabilities: 00:09:28.887 16b Guard Protection Information Storage Tag Support: No 00:09:28.887 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:28.887 Storage Tag Check Read Support: No 00:09:28.887 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.887 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.887 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.887 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.887 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.887 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.887 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.887 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.887 ===================================================== 00:09:28.887 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:28.887 ===================================================== 00:09:28.887 Controller Capabilities/Features 00:09:28.888 ================================ 00:09:28.888 Vendor ID: 1b36 00:09:28.888 Subsystem Vendor ID: 1af4 00:09:28.888 Serial Number: 12342 00:09:28.888 Model Number: QEMU NVMe Ctrl 00:09:28.888 Firmware Version: 8.0.0 00:09:28.888 Recommended Arb Burst: 6 00:09:28.888 IEEE OUI Identifier: 00 54 52 00:09:28.888 Multi-path I/O 00:09:28.888 May have multiple subsystem ports: No 00:09:28.888 May have multiple controllers: No 00:09:28.888 Associated with SR-IOV VF: No 00:09:28.888 Max Data Transfer Size: 524288 00:09:28.888 Max Number of Namespaces: 256 00:09:28.888 
Max Number of I/O Queues: 64 00:09:28.888 NVMe Specification Version (VS): 1.4 00:09:28.888 NVMe Specification Version (Identify): 1.4 00:09:28.888 Maximum Queue Entries: 2048 00:09:28.888 Contiguous Queues Required: Yes 00:09:28.888 Arbitration Mechanisms Supported 00:09:28.888 Weighted Round Robin: Not Supported 00:09:28.888 Vendor Specific: Not Supported 00:09:28.888 Reset Timeout: 7500 ms 00:09:28.888 Doorbell Stride: 4 bytes 00:09:28.888 NVM Subsystem Reset: Not Supported 00:09:28.888 Command Sets Supported 00:09:28.888 NVM Command Set: Supported 00:09:28.888 Boot Partition: Not Supported 00:09:28.888 Memory Page Size Minimum: 4096 bytes 00:09:28.888 Memory Page Size Maximum: 65536 bytes 00:09:28.888 Persistent Memory Region: Not Supported 00:09:28.888 Optional Asynchronous Events Supported 00:09:28.888 Namespace Attribute Notices: Supported 00:09:28.888 Firmware Activation Notices: Not Supported 00:09:28.888 ANA Change Notices: Not Supported 00:09:28.888 PLE Aggregate Log Change Notices: Not Supported 00:09:28.888 LBA Status Info Alert Notices: Not Supported 00:09:28.888 EGE Aggregate Log Change Notices: Not Supported 00:09:28.888 Normal NVM Subsystem Shutdown event: Not Supported 00:09:28.888 Zone Descriptor Change Notices: Not Supported 00:09:28.888 Discovery Log Change Notices: Not Supported 00:09:28.888 Controller Attributes 00:09:28.888 128-bit Host Identifier: Not Supported 00:09:28.888 Non-Operational Permissive Mode: Not Supported 00:09:28.888 NVM Sets: Not Supported 00:09:28.888 Read Recovery Levels: Not Supported 00:09:28.888 Endurance Groups: Not Supported 00:09:28.888 Predictable Latency Mode: Not Supported 00:09:28.888 Traffic Based Keep ALive: Not Supported 00:09:28.888 Namespace Granularity: Not Supported 00:09:28.888 SQ Associations: Not Supported 00:09:28.888 UUID List: Not Supported 00:09:28.888 Multi-Domain Subsystem: Not Supported 00:09:28.888 Fixed Capacity Management: Not Supported 00:09:28.888 Variable Capacity Management: Not Supported 00:09:28.888 Delete Endurance Group: Not Supported 00:09:28.888 Delete NVM Set: Not Supported 00:09:28.888 Extended LBA Formats Supported: Supported 00:09:28.888 Flexible Data Placement Supported: Not Supported 00:09:28.888 00:09:28.888 Controller Memory Buffer Support 00:09:28.888 ================================ 00:09:28.888 Supported: No 00:09:28.888 00:09:28.888 Persistent Memory Region Support 00:09:28.888 ================================ 00:09:28.888 Supported: No 00:09:28.888 00:09:28.888 Admin Command Set Attributes 00:09:28.888 ============================ 00:09:28.888 Security Send/Receive: Not Supported 00:09:28.888 Format NVM: Supported 00:09:28.888 Firmware Activate/Download: Not Supported 00:09:28.888 Namespace Management: Supported 00:09:28.888 Device Self-Test: Not Supported 00:09:28.888 Directives: Supported 00:09:28.888 NVMe-MI: Not Supported 00:09:28.888 Virtualization Management: Not Supported 00:09:28.888 Doorbell Buffer Config: Supported 00:09:28.888 Get LBA Status Capability: Not Supported 00:09:28.888 Command & Feature Lockdown Capability: Not Supported 00:09:28.888 Abort Command Limit: 4 00:09:28.888 Async Event Request Limit: 4 00:09:28.888 Number of Firmware Slots: N/A 00:09:28.888 Firmware Slot 1 Read-Only: N/A 00:09:28.888 Firmware Activation Without Reset: N/A 00:09:28.888 Multiple Update Detection Support: N/A 00:09:28.888 Firmware Update Granularity: No Information Provided 00:09:28.888 Per-Namespace SMART Log: Yes 00:09:28.888 Asymmetric Namespace Access Log Page: Not Supported 00:09:28.888 
Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:28.888 Command Effects Log Page: Supported 00:09:28.888 Get Log Page Extended Data: Supported 00:09:28.888 Telemetry Log Pages: Not Supported 00:09:28.888 Persistent Event Log Pages: Not Supported 00:09:28.888 Supported Log Pages Log Page: May Support 00:09:28.888 Commands Supported & Effects Log Page: Not Supported 00:09:28.888 Feature Identifiers & Effects Log Page:May Support 00:09:28.888 NVMe-MI Commands & Effects Log Page: May Support 00:09:28.888 Data Area 4 for Telemetry Log: Not Supported 00:09:28.888 Error Log Page Entries Supported: 1 00:09:28.888 Keep Alive: Not Supported 00:09:28.888 00:09:28.888 NVM Command Set Attributes 00:09:28.888 ========================== 00:09:28.888 Submission Queue Entry Size 00:09:28.888 Max: 64 00:09:28.888 Min: 64 00:09:28.888 Completion Queue Entry Size 00:09:28.888 Max: 16 00:09:28.888 Min: 16 00:09:28.888 Number of Namespaces: 256 00:09:28.888 Compare Command: Supported 00:09:28.888 Write Uncorrectable Command: Not Supported 00:09:28.888 Dataset Management Command: Supported 00:09:28.888 Write Zeroes Command: Supported 00:09:28.888 Set Features Save Field: Supported 00:09:28.888 Reservations: Not Supported 00:09:28.888 Timestamp: Supported 00:09:28.888 Copy: Supported 00:09:28.888 Volatile Write Cache: Present 00:09:28.888 Atomic Write Unit (Normal): 1 00:09:28.888 Atomic Write Unit (PFail): 1 00:09:28.888 Atomic Compare & Write Unit: 1 00:09:28.888 Fused Compare & Write: Not Supported 00:09:28.888 Scatter-Gather List 00:09:28.888 SGL Command Set: Supported 00:09:28.888 SGL Keyed: Not Supported 00:09:28.888 SGL Bit Bucket Descriptor: Not Supported 00:09:28.888 SGL Metadata Pointer: Not Supported 00:09:28.888 Oversized SGL: Not Supported 00:09:28.888 SGL Metadata Address: Not Supported 00:09:28.888 SGL Offset: Not Supported 00:09:28.888 Transport SGL Data Block: Not Supported 00:09:28.888 Replay Protected Memory Block: Not Supported 00:09:28.888 00:09:28.888 Firmware Slot Information 00:09:28.888 ========================= 00:09:28.888 Active slot: 1 00:09:28.888 Slot 1 Firmware Revision: 1.0 00:09:28.888 00:09:28.888 00:09:28.888 Commands Supported and Effects 00:09:28.888 ============================== 00:09:28.888 Admin Commands 00:09:28.888 -------------- 00:09:28.888 Delete I/O Submission Queue (00h): Supported 00:09:28.888 Create I/O Submission Queue (01h): Supported 00:09:28.888 Get Log Page (02h): Supported 00:09:28.888 Delete I/O Completion Queue (04h): Supported 00:09:28.888 Create I/O Completion Queue (05h): Supported 00:09:28.888 Identify (06h): Supported 00:09:28.888 Abort (08h): Supported 00:09:28.888 Set Features (09h): Supported 00:09:28.888 Get Features (0Ah): Supported 00:09:28.888 Asynchronous Event Request (0Ch): Supported 00:09:28.888 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:28.888 Directive Send (19h): Supported 00:09:28.888 Directive Receive (1Ah): Supported 00:09:28.888 Virtualization Management (1Ch): Supported 00:09:28.888 Doorbell Buffer Config (7Ch): Supported 00:09:28.888 Format NVM (80h): Supported LBA-Change 00:09:28.888 I/O Commands 00:09:28.888 ------------ 00:09:28.888 Flush (00h): Supported LBA-Change 00:09:28.888 Write (01h): Supported LBA-Change 00:09:28.888 Read (02h): Supported 00:09:28.888 Compare (05h): Supported 00:09:28.888 Write Zeroes (08h): Supported LBA-Change 00:09:28.888 Dataset Management (09h): Supported LBA-Change 00:09:28.888 Unknown (0Ch): Supported 00:09:28.889 Unknown (12h): Supported 00:09:28.889 Copy (19h): Supported 
LBA-Change 00:09:28.889 Unknown (1Dh): Supported LBA-Change 00:09:28.889 00:09:28.889 Error Log 00:09:28.889 ========= 00:09:28.889 00:09:28.889 Arbitration 00:09:28.889 =========== 00:09:28.889 Arbitration Burst: no limit 00:09:28.889 00:09:28.889 Power Management 00:09:28.889 ================ 00:09:28.889 Number of Power States: 1 00:09:28.889 Current Power State: Power State #0 00:09:28.889 Power State #0: 00:09:28.889 Max Power: 25.00 W 00:09:28.889 Non-Operational State: Operational 00:09:28.889 Entry Latency: 16 microseconds 00:09:28.889 Exit Latency: 4 microseconds 00:09:28.889 Relative Read Throughput: 0 00:09:28.889 Relative Read Latency: 0 00:09:28.889 Relative Write Throughput: 0 00:09:28.889 Relative Write Latency: 0 00:09:28.889 Idle Power: Not Reported 00:09:28.889 Active Power: Not Reported 00:09:28.889 Non-Operational Permissive Mode: Not Supported 00:09:28.889 00:09:28.889 Health Information 00:09:28.889 ================== 00:09:28.889 Critical Warnings: 00:09:28.889 Available Spare Space: OK 00:09:28.889 Temperature: OK 00:09:28.889 Device Reliability: OK 00:09:28.889 Read Only: No 00:09:28.889 Volatile Memory Backup: OK 00:09:28.889 Current Temperature: 323 Kelvin (50 Celsius) 00:09:28.889 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:28.889 Available Spare: 0% 00:09:28.889 Available Spare Threshold: 0% 00:09:28.889 Life Percentage Used: 0% 00:09:28.889 Data Units Read: 2096 00:09:28.889 Data Units Written: 1884 00:09:28.889 Host Read Commands: 94258 00:09:28.889 Host Write Commands: 92527 00:09:28.889 Controller Busy Time: 0 minutes 00:09:28.889 Power Cycles: 0 00:09:28.889 Power On Hours: 0 hours 00:09:28.889 Unsafe Shutdowns: 0 00:09:28.889 Unrecoverable Media Errors: 0 00:09:28.889 Lifetime Error Log Entries: 0 00:09:28.889 Warning Temperature Time: 0 minutes 00:09:28.889 Critical Temperature Time: 0 minutes 00:09:28.889 00:09:28.889 Number of Queues 00:09:28.889 ================ 00:09:28.889 Number of I/O Submission Queues: 64 00:09:28.889 Number of I/O Completion Queues: 64 00:09:28.889 00:09:28.889 ZNS Specific Controller Data 00:09:28.889 ============================ 00:09:28.889 Zone Append Size Limit: 0 00:09:28.889 00:09:28.889 00:09:28.889 Active Namespaces 00:09:28.889 ================= 00:09:28.889 Namespace ID:1 00:09:28.889 Error Recovery Timeout: Unlimited 00:09:28.889 Command Set Identifier: NVM (00h) 00:09:28.889 Deallocate: Supported 00:09:28.889 Deallocated/Unwritten Error: Supported 00:09:28.889 Deallocated Read Value: All 0x00 00:09:28.889 Deallocate in Write Zeroes: Not Supported 00:09:28.889 Deallocated Guard Field: 0xFFFF 00:09:28.889 Flush: Supported 00:09:28.889 Reservation: Not Supported 00:09:28.889 Namespace Sharing Capabilities: Private 00:09:28.889 Size (in LBAs): 1048576 (4GiB) 00:09:28.889 Capacity (in LBAs): 1048576 (4GiB) 00:09:28.889 Utilization (in LBAs): 1048576 (4GiB) 00:09:28.889 Thin Provisioning: Not Supported 00:09:28.889 Per-NS Atomic Units: No 00:09:28.889 Maximum Single Source Range Length: 128 00:09:28.889 Maximum Copy Length: 128 00:09:28.889 Maximum Source Range Count: 128 00:09:28.889 NGUID/EUI64 Never Reused: No 00:09:28.889 Namespace Write Protected: No 00:09:28.889 Number of LBA Formats: 8 00:09:28.889 Current LBA Format: LBA Format #04 00:09:28.889 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:28.889 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:28.889 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:28.889 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:28.889 LBA Format #04: 
Data Size: 4096 Metadata Size: 0 00:09:28.889 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:28.889 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:28.889 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:28.889 00:09:28.889 NVM Specific Namespace Data 00:09:28.889 =========================== 00:09:28.889 Logical Block Storage Tag Mask: 0 00:09:28.889 Protection Information Capabilities: 00:09:28.889 16b Guard Protection Information Storage Tag Support: No 00:09:28.889 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:28.889 Storage Tag Check Read Support: No 00:09:28.889 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.889 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.889 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.889 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.889 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.889 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.889 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.889 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.889 Namespace ID:2 00:09:28.889 Error Recovery Timeout: Unlimited 00:09:28.889 Command Set Identifier: NVM (00h) 00:09:28.889 Deallocate: Supported 00:09:28.889 Deallocated/Unwritten Error: Supported 00:09:28.889 Deallocated Read Value: All 0x00 00:09:28.889 Deallocate in Write Zeroes: Not Supported 00:09:28.889 Deallocated Guard Field: 0xFFFF 00:09:28.889 Flush: Supported 00:09:28.889 Reservation: Not Supported 00:09:28.889 Namespace Sharing Capabilities: Private 00:09:28.889 Size (in LBAs): 1048576 (4GiB) 00:09:28.889 Capacity (in LBAs): 1048576 (4GiB) 00:09:28.889 Utilization (in LBAs): 1048576 (4GiB) 00:09:28.889 Thin Provisioning: Not Supported 00:09:28.889 Per-NS Atomic Units: No 00:09:28.889 Maximum Single Source Range Length: 128 00:09:28.889 Maximum Copy Length: 128 00:09:28.889 Maximum Source Range Count: 128 00:09:28.889 NGUID/EUI64 Never Reused: No 00:09:28.889 Namespace Write Protected: No 00:09:28.889 Number of LBA Formats: 8 00:09:28.889 Current LBA Format: LBA Format #04 00:09:28.889 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:28.889 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:28.889 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:28.889 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:28.889 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:28.889 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:28.889 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:28.889 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:28.889 00:09:28.889 NVM Specific Namespace Data 00:09:28.889 =========================== 00:09:28.889 Logical Block Storage Tag Mask: 0 00:09:28.889 Protection Information Capabilities: 00:09:28.889 16b Guard Protection Information Storage Tag Support: No 00:09:28.889 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:28.889 Storage Tag Check Read Support: No 00:09:28.889 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.889 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 
16b Guard PI 00:09:28.889 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.889 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.889 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.889 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.889 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.889 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.889 Namespace ID:3 00:09:28.889 Error Recovery Timeout: Unlimited 00:09:28.889 Command Set Identifier: NVM (00h) 00:09:28.889 Deallocate: Supported 00:09:28.889 Deallocated/Unwritten Error: Supported 00:09:28.889 Deallocated Read Value: All 0x00 00:09:28.889 Deallocate in Write Zeroes: Not Supported 00:09:28.889 Deallocated Guard Field: 0xFFFF 00:09:28.889 Flush: Supported 00:09:28.889 Reservation: Not Supported 00:09:28.889 Namespace Sharing Capabilities: Private 00:09:28.889 Size (in LBAs): 1048576 (4GiB) 00:09:28.889 Capacity (in LBAs): 1048576 (4GiB) 00:09:28.889 Utilization (in LBAs): 1048576 (4GiB) 00:09:28.889 Thin Provisioning: Not Supported 00:09:28.889 Per-NS Atomic Units: No 00:09:28.889 Maximum Single Source Range Length: 128 00:09:28.889 Maximum Copy Length: 128 00:09:28.889 Maximum Source Range Count: 128 00:09:28.889 NGUID/EUI64 Never Reused: No 00:09:28.889 Namespace Write Protected: No 00:09:28.889 Number of LBA Formats: 8 00:09:28.889 Current LBA Format: LBA Format #04 00:09:28.889 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:28.889 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:28.889 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:28.889 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:28.889 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:28.889 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:28.889 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:28.889 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:28.889 00:09:28.889 NVM Specific Namespace Data 00:09:28.889 =========================== 00:09:28.889 Logical Block Storage Tag Mask: 0 00:09:28.890 Protection Information Capabilities: 00:09:28.890 16b Guard Protection Information Storage Tag Support: No 00:09:28.890 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:28.890 Storage Tag Check Read Support: No 00:09:28.890 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.890 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.890 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.890 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.890 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.890 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.890 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:28.890 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
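The trace below re-runs spdk_nvme_identify once for each PCIe controller in the job's bdfs array. A rough standalone sketch of that loop, assuming the binary path and the three traddr values visible in this log (they are not a canonical list):

    #!/usr/bin/env bash
    # Sketch of the per-controller identify loop traced at nvme/nvme.sh@15-16.
    # SPDK_BIN and the bdfs array are assumptions lifted from this log.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
    bdfs=("0000:00:10.0" "0000:00:11.0" "0000:00:12.0")
    for bdf in "${bdfs[@]}"; do
        "$SPDK_BIN/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
    done

Each iteration prints a full controller/namespace report like the ones that follow.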
00:09:28.890 19:27:56 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:28.890 19:27:56 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:29.150 ===================================================== 00:09:29.150 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:29.150 ===================================================== 00:09:29.150 Controller Capabilities/Features 00:09:29.150 ================================ 00:09:29.150 Vendor ID: 1b36 00:09:29.150 Subsystem Vendor ID: 1af4 00:09:29.150 Serial Number: 12340 00:09:29.150 Model Number: QEMU NVMe Ctrl 00:09:29.150 Firmware Version: 8.0.0 00:09:29.150 Recommended Arb Burst: 6 00:09:29.150 IEEE OUI Identifier: 00 54 52 00:09:29.150 Multi-path I/O 00:09:29.150 May have multiple subsystem ports: No 00:09:29.150 May have multiple controllers: No 00:09:29.150 Associated with SR-IOV VF: No 00:09:29.150 Max Data Transfer Size: 524288 00:09:29.150 Max Number of Namespaces: 256 00:09:29.150 Max Number of I/O Queues: 64 00:09:29.150 NVMe Specification Version (VS): 1.4 00:09:29.150 NVMe Specification Version (Identify): 1.4 00:09:29.150 Maximum Queue Entries: 2048 00:09:29.150 Contiguous Queues Required: Yes 00:09:29.150 Arbitration Mechanisms Supported 00:09:29.150 Weighted Round Robin: Not Supported 00:09:29.150 Vendor Specific: Not Supported 00:09:29.150 Reset Timeout: 7500 ms 00:09:29.150 Doorbell Stride: 4 bytes 00:09:29.150 NVM Subsystem Reset: Not Supported 00:09:29.150 Command Sets Supported 00:09:29.150 NVM Command Set: Supported 00:09:29.150 Boot Partition: Not Supported 00:09:29.150 Memory Page Size Minimum: 4096 bytes 00:09:29.150 Memory Page Size Maximum: 65536 bytes 00:09:29.150 Persistent Memory Region: Not Supported 00:09:29.150 Optional Asynchronous Events Supported 00:09:29.150 Namespace Attribute Notices: Supported 00:09:29.151 Firmware Activation Notices: Not Supported 00:09:29.151 ANA Change Notices: Not Supported 00:09:29.151 PLE Aggregate Log Change Notices: Not Supported 00:09:29.151 LBA Status Info Alert Notices: Not Supported 00:09:29.151 EGE Aggregate Log Change Notices: Not Supported 00:09:29.151 Normal NVM Subsystem Shutdown event: Not Supported 00:09:29.151 Zone Descriptor Change Notices: Not Supported 00:09:29.151 Discovery Log Change Notices: Not Supported 00:09:29.151 Controller Attributes 00:09:29.151 128-bit Host Identifier: Not Supported 00:09:29.151 Non-Operational Permissive Mode: Not Supported 00:09:29.151 NVM Sets: Not Supported 00:09:29.151 Read Recovery Levels: Not Supported 00:09:29.151 Endurance Groups: Not Supported 00:09:29.151 Predictable Latency Mode: Not Supported 00:09:29.151 Traffic Based Keep ALive: Not Supported 00:09:29.151 Namespace Granularity: Not Supported 00:09:29.151 SQ Associations: Not Supported 00:09:29.151 UUID List: Not Supported 00:09:29.151 Multi-Domain Subsystem: Not Supported 00:09:29.151 Fixed Capacity Management: Not Supported 00:09:29.151 Variable Capacity Management: Not Supported 00:09:29.151 Delete Endurance Group: Not Supported 00:09:29.151 Delete NVM Set: Not Supported 00:09:29.151 Extended LBA Formats Supported: Supported 00:09:29.151 Flexible Data Placement Supported: Not Supported 00:09:29.151 00:09:29.151 Controller Memory Buffer Support 00:09:29.151 ================================ 00:09:29.151 Supported: No 00:09:29.151 00:09:29.151 Persistent Memory Region Support 00:09:29.151 ================================ 00:09:29.151 Supported: No 00:09:29.151 00:09:29.151 Admin Command Set Attributes 00:09:29.151 ============================ 00:09:29.151 Security Send/Receive: Not Supported 00:09:29.151 
Format NVM: Supported 00:09:29.151 Firmware Activate/Download: Not Supported 00:09:29.151 Namespace Management: Supported 00:09:29.151 Device Self-Test: Not Supported 00:09:29.151 Directives: Supported 00:09:29.151 NVMe-MI: Not Supported 00:09:29.151 Virtualization Management: Not Supported 00:09:29.151 Doorbell Buffer Config: Supported 00:09:29.151 Get LBA Status Capability: Not Supported 00:09:29.151 Command & Feature Lockdown Capability: Not Supported 00:09:29.151 Abort Command Limit: 4 00:09:29.151 Async Event Request Limit: 4 00:09:29.151 Number of Firmware Slots: N/A 00:09:29.151 Firmware Slot 1 Read-Only: N/A 00:09:29.151 Firmware Activation Without Reset: N/A 00:09:29.151 Multiple Update Detection Support: N/A 00:09:29.151 Firmware Update Granularity: No Information Provided 00:09:29.151 Per-Namespace SMART Log: Yes 00:09:29.151 Asymmetric Namespace Access Log Page: Not Supported 00:09:29.151 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:29.151 Command Effects Log Page: Supported 00:09:29.151 Get Log Page Extended Data: Supported 00:09:29.151 Telemetry Log Pages: Not Supported 00:09:29.151 Persistent Event Log Pages: Not Supported 00:09:29.151 Supported Log Pages Log Page: May Support 00:09:29.151 Commands Supported & Effects Log Page: Not Supported 00:09:29.151 Feature Identifiers & Effects Log Page:May Support 00:09:29.151 NVMe-MI Commands & Effects Log Page: May Support 00:09:29.151 Data Area 4 for Telemetry Log: Not Supported 00:09:29.151 Error Log Page Entries Supported: 1 00:09:29.151 Keep Alive: Not Supported 00:09:29.151 00:09:29.151 NVM Command Set Attributes 00:09:29.151 ========================== 00:09:29.151 Submission Queue Entry Size 00:09:29.151 Max: 64 00:09:29.151 Min: 64 00:09:29.151 Completion Queue Entry Size 00:09:29.151 Max: 16 00:09:29.151 Min: 16 00:09:29.151 Number of Namespaces: 256 00:09:29.151 Compare Command: Supported 00:09:29.151 Write Uncorrectable Command: Not Supported 00:09:29.151 Dataset Management Command: Supported 00:09:29.151 Write Zeroes Command: Supported 00:09:29.151 Set Features Save Field: Supported 00:09:29.151 Reservations: Not Supported 00:09:29.151 Timestamp: Supported 00:09:29.151 Copy: Supported 00:09:29.151 Volatile Write Cache: Present 00:09:29.151 Atomic Write Unit (Normal): 1 00:09:29.151 Atomic Write Unit (PFail): 1 00:09:29.151 Atomic Compare & Write Unit: 1 00:09:29.151 Fused Compare & Write: Not Supported 00:09:29.151 Scatter-Gather List 00:09:29.151 SGL Command Set: Supported 00:09:29.151 SGL Keyed: Not Supported 00:09:29.151 SGL Bit Bucket Descriptor: Not Supported 00:09:29.151 SGL Metadata Pointer: Not Supported 00:09:29.151 Oversized SGL: Not Supported 00:09:29.151 SGL Metadata Address: Not Supported 00:09:29.151 SGL Offset: Not Supported 00:09:29.151 Transport SGL Data Block: Not Supported 00:09:29.151 Replay Protected Memory Block: Not Supported 00:09:29.151 00:09:29.151 Firmware Slot Information 00:09:29.151 ========================= 00:09:29.151 Active slot: 1 00:09:29.151 Slot 1 Firmware Revision: 1.0 00:09:29.151 00:09:29.151 00:09:29.151 Commands Supported and Effects 00:09:29.151 ============================== 00:09:29.151 Admin Commands 00:09:29.151 -------------- 00:09:29.151 Delete I/O Submission Queue (00h): Supported 00:09:29.151 Create I/O Submission Queue (01h): Supported 00:09:29.151 Get Log Page (02h): Supported 00:09:29.151 Delete I/O Completion Queue (04h): Supported 00:09:29.151 Create I/O Completion Queue (05h): Supported 00:09:29.151 Identify (06h): Supported 00:09:29.151 Abort (08h): Supported 
00:09:29.151 Set Features (09h): Supported 00:09:29.151 Get Features (0Ah): Supported 00:09:29.151 Asynchronous Event Request (0Ch): Supported 00:09:29.151 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:29.151 Directive Send (19h): Supported 00:09:29.151 Directive Receive (1Ah): Supported 00:09:29.151 Virtualization Management (1Ch): Supported 00:09:29.151 Doorbell Buffer Config (7Ch): Supported 00:09:29.151 Format NVM (80h): Supported LBA-Change 00:09:29.151 I/O Commands 00:09:29.151 ------------ 00:09:29.151 Flush (00h): Supported LBA-Change 00:09:29.151 Write (01h): Supported LBA-Change 00:09:29.151 Read (02h): Supported 00:09:29.151 Compare (05h): Supported 00:09:29.151 Write Zeroes (08h): Supported LBA-Change 00:09:29.151 Dataset Management (09h): Supported LBA-Change 00:09:29.151 Unknown (0Ch): Supported 00:09:29.151 Unknown (12h): Supported 00:09:29.151 Copy (19h): Supported LBA-Change 00:09:29.151 Unknown (1Dh): Supported LBA-Change 00:09:29.151 00:09:29.151 Error Log 00:09:29.151 ========= 00:09:29.151 00:09:29.151 Arbitration 00:09:29.151 =========== 00:09:29.151 Arbitration Burst: no limit 00:09:29.151 00:09:29.151 Power Management 00:09:29.151 ================ 00:09:29.151 Number of Power States: 1 00:09:29.151 Current Power State: Power State #0 00:09:29.151 Power State #0: 00:09:29.151 Max Power: 25.00 W 00:09:29.151 Non-Operational State: Operational 00:09:29.151 Entry Latency: 16 microseconds 00:09:29.151 Exit Latency: 4 microseconds 00:09:29.151 Relative Read Throughput: 0 00:09:29.151 Relative Read Latency: 0 00:09:29.151 Relative Write Throughput: 0 00:09:29.151 Relative Write Latency: 0 00:09:29.151 Idle Power: Not Reported 00:09:29.151 Active Power: Not Reported 00:09:29.151 Non-Operational Permissive Mode: Not Supported 00:09:29.151 00:09:29.151 Health Information 00:09:29.151 ================== 00:09:29.151 Critical Warnings: 00:09:29.151 Available Spare Space: OK 00:09:29.151 Temperature: OK 00:09:29.151 Device Reliability: OK 00:09:29.151 Read Only: No 00:09:29.151 Volatile Memory Backup: OK 00:09:29.151 Current Temperature: 323 Kelvin (50 Celsius) 00:09:29.151 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:29.151 Available Spare: 0% 00:09:29.151 Available Spare Threshold: 0% 00:09:29.151 Life Percentage Used: 0% 00:09:29.151 Data Units Read: 683 00:09:29.151 Data Units Written: 611 00:09:29.151 Host Read Commands: 31155 00:09:29.151 Host Write Commands: 30941 00:09:29.151 Controller Busy Time: 0 minutes 00:09:29.151 Power Cycles: 0 00:09:29.151 Power On Hours: 0 hours 00:09:29.151 Unsafe Shutdowns: 0 00:09:29.151 Unrecoverable Media Errors: 0 00:09:29.151 Lifetime Error Log Entries: 0 00:09:29.151 Warning Temperature Time: 0 minutes 00:09:29.151 Critical Temperature Time: 0 minutes 00:09:29.151 00:09:29.151 Number of Queues 00:09:29.151 ================ 00:09:29.151 Number of I/O Submission Queues: 64 00:09:29.151 Number of I/O Completion Queues: 64 00:09:29.151 00:09:29.151 ZNS Specific Controller Data 00:09:29.151 ============================ 00:09:29.151 Zone Append Size Limit: 0 00:09:29.151 00:09:29.151 00:09:29.151 Active Namespaces 00:09:29.151 ================= 00:09:29.151 Namespace ID:1 00:09:29.151 Error Recovery Timeout: Unlimited 00:09:29.151 Command Set Identifier: NVM (00h) 00:09:29.151 Deallocate: Supported 00:09:29.151 Deallocated/Unwritten Error: Supported 00:09:29.152 Deallocated Read Value: All 0x00 00:09:29.152 Deallocate in Write Zeroes: Not Supported 00:09:29.152 Deallocated Guard Field: 0xFFFF 00:09:29.152 Flush: 
Supported 00:09:29.152 Reservation: Not Supported 00:09:29.152 Metadata Transferred as: Separate Metadata Buffer 00:09:29.152 Namespace Sharing Capabilities: Private 00:09:29.152 Size (in LBAs): 1548666 (5GiB) 00:09:29.152 Capacity (in LBAs): 1548666 (5GiB) 00:09:29.152 Utilization (in LBAs): 1548666 (5GiB) 00:09:29.152 Thin Provisioning: Not Supported 00:09:29.152 Per-NS Atomic Units: No 00:09:29.152 Maximum Single Source Range Length: 128 00:09:29.152 Maximum Copy Length: 128 00:09:29.152 Maximum Source Range Count: 128 00:09:29.152 NGUID/EUI64 Never Reused: No 00:09:29.152 Namespace Write Protected: No 00:09:29.152 Number of LBA Formats: 8 00:09:29.152 Current LBA Format: LBA Format #07 00:09:29.152 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:29.152 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:29.152 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:29.152 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:29.152 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:29.152 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:29.152 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:29.152 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:29.152 00:09:29.152 NVM Specific Namespace Data 00:09:29.152 =========================== 00:09:29.152 Logical Block Storage Tag Mask: 0 00:09:29.152 Protection Information Capabilities: 00:09:29.152 16b Guard Protection Information Storage Tag Support: No 00:09:29.152 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:29.152 Storage Tag Check Read Support: No 00:09:29.152 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.152 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.152 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.152 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.152 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.152 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.152 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.152 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
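Every identify report repeats the same health block (temperature, spare, data-unit counters). One way such a dump could be post-processed, assuming a hypothetical capture file identify.log with one record per line:

    # Convert the reported temperature from Kelvin to Celsius; the reports
    # themselves use a 273 K offset (323 Kelvin -> 50 Celsius).
    awk -F': ' '/Current Temperature/ { split($2, a, " "); printf "temp_c=%d\n", a[1] - 273 }' identify.log
    # Pull the spare and wear counters as printed.
    grep -E 'Available Spare:|Life Percentage Used:' identify.log

This is illustration only, not part of the job's own scripts.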
00:09:29.152 19:27:56 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:29.152 19:27:56 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:29.414 ===================================================== 00:09:29.414 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:29.414 ===================================================== 00:09:29.414 Controller Capabilities/Features 00:09:29.414 ================================ 00:09:29.414 Vendor ID: 1b36 00:09:29.414 Subsystem Vendor ID: 1af4 00:09:29.414 Serial Number: 12341 00:09:29.414 Model Number: QEMU NVMe Ctrl 00:09:29.414 Firmware Version: 8.0.0 00:09:29.414 Recommended Arb Burst: 6 00:09:29.414 IEEE OUI Identifier: 00 54 52 00:09:29.414 Multi-path I/O 00:09:29.414 May have multiple subsystem ports: No 00:09:29.414 May have multiple controllers: No 00:09:29.414 Associated with SR-IOV VF: No 00:09:29.414 Max Data Transfer Size: 524288 00:09:29.414 Max Number of Namespaces: 256 00:09:29.414 Max Number of I/O Queues: 64 00:09:29.414 NVMe Specification Version (VS): 1.4 00:09:29.414 NVMe Specification Version (Identify): 1.4 00:09:29.414 Maximum Queue Entries: 2048 00:09:29.414 Contiguous Queues Required: Yes 00:09:29.414 Arbitration Mechanisms Supported 00:09:29.414 Weighted Round Robin: Not Supported 00:09:29.414 Vendor Specific: Not Supported 00:09:29.414 Reset Timeout: 7500 ms 00:09:29.414 Doorbell Stride: 4 bytes 00:09:29.415 NVM Subsystem Reset: Not Supported 00:09:29.415 Command Sets Supported 00:09:29.415 NVM Command Set: Supported 00:09:29.415 Boot Partition: Not Supported 00:09:29.415 Memory Page Size Minimum: 4096 bytes 00:09:29.415 Memory Page Size Maximum: 65536 bytes 00:09:29.415 Persistent Memory Region: Not Supported 00:09:29.415 Optional Asynchronous Events Supported 00:09:29.415 Namespace Attribute Notices: Supported 00:09:29.415 Firmware Activation Notices: Not Supported 00:09:29.415 ANA Change Notices: Not Supported 00:09:29.415 PLE Aggregate Log Change Notices: Not Supported 00:09:29.415 LBA Status Info Alert Notices: Not Supported 00:09:29.415 EGE Aggregate Log Change Notices: Not Supported 00:09:29.415 Normal NVM Subsystem Shutdown event: Not Supported 00:09:29.415 Zone Descriptor Change Notices: Not Supported 00:09:29.415 Discovery Log Change Notices: Not Supported 00:09:29.415 Controller Attributes 00:09:29.415 128-bit Host Identifier: Not Supported 00:09:29.415 Non-Operational Permissive Mode: Not Supported 00:09:29.415 NVM Sets: Not Supported 00:09:29.415 Read Recovery Levels: Not Supported 00:09:29.415 Endurance Groups: Not Supported 00:09:29.415 Predictable Latency Mode: Not Supported 00:09:29.415 Traffic Based Keep ALive: Not Supported 00:09:29.415 Namespace Granularity: Not Supported 00:09:29.415 SQ Associations: Not Supported 00:09:29.415 UUID List: Not Supported 00:09:29.415 Multi-Domain Subsystem: Not Supported 00:09:29.415 Fixed Capacity Management: Not Supported 00:09:29.415 Variable Capacity Management: Not Supported 00:09:29.415 Delete Endurance Group: Not Supported 00:09:29.415 Delete NVM Set: Not Supported 00:09:29.415 Extended LBA Formats Supported: Supported 00:09:29.415 Flexible Data Placement Supported: Not Supported 00:09:29.415 00:09:29.415 Controller Memory Buffer Support 00:09:29.415 ================================ 00:09:29.415 Supported: No 00:09:29.415 00:09:29.415 Persistent Memory Region Support 00:09:29.415 ================================ 00:09:29.415 Supported: No 00:09:29.415 00:09:29.415 Admin Command Set Attributes 00:09:29.415 ============================ 00:09:29.415 Security Send/Receive: Not Supported 00:09:29.415 Format NVM: Supported 00:09:29.415 Firmware Activate/Download: Not Supported 00:09:29.415 Namespace Management: Supported 00:09:29.415 Device Self-Test: Not Supported 00:09:29.415 Directives: Supported 00:09:29.415 NVMe-MI: Not Supported 00:09:29.415 Virtualization Management: Not Supported 00:09:29.415 Doorbell Buffer Config: Supported 00:09:29.415 Get LBA Status Capability: Not Supported 00:09:29.415 Command & Feature Lockdown Capability: Not Supported 00:09:29.415 Abort Command Limit: 4 00:09:29.415 Async Event Request Limit: 4 00:09:29.415 Number of Firmware Slots: N/A 00:09:29.415 Firmware Slot 1 Read-Only: N/A 00:09:29.415 Firmware Activation Without Reset: N/A 00:09:29.415 Multiple Update Detection Support: N/A 00:09:29.415 Firmware Update Granularity: No Information Provided 00:09:29.415 Per-Namespace SMART Log: Yes 00:09:29.415 Asymmetric Namespace Access Log Page: Not Supported 00:09:29.415 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:09:29.415 Command Effects Log Page: Supported 00:09:29.415 Get Log Page Extended Data: Supported 00:09:29.415 Telemetry Log Pages: Not Supported 00:09:29.415 Persistent Event Log Pages: Not Supported 00:09:29.415 Supported Log Pages Log Page: May Support 00:09:29.415 Commands Supported & Effects Log Page: Not Supported 00:09:29.415 Feature Identifiers & Effects Log Page:May Support 00:09:29.415 NVMe-MI Commands & Effects Log Page: May Support 00:09:29.415 Data Area 4 for Telemetry Log: Not Supported 00:09:29.415 Error Log Page Entries Supported: 1 00:09:29.415 Keep Alive: Not Supported 00:09:29.415 00:09:29.415 NVM Command Set Attributes 00:09:29.415 ========================== 00:09:29.415 Submission Queue Entry Size 00:09:29.415 Max: 64 00:09:29.415 Min: 64 00:09:29.415 Completion Queue Entry Size 00:09:29.415 Max: 16 00:09:29.415 Min: 16 00:09:29.415 Number of Namespaces: 256 00:09:29.415 Compare Command: Supported 00:09:29.415 Write Uncorrectable Command: Not Supported 00:09:29.415 Dataset Management Command: Supported 00:09:29.415 Write Zeroes Command: Supported 00:09:29.415 Set Features Save Field: Supported 00:09:29.415 Reservations: Not Supported 00:09:29.415 Timestamp: Supported 00:09:29.415 Copy: Supported 00:09:29.415 Volatile Write Cache: Present 00:09:29.415 Atomic Write Unit (Normal): 1 00:09:29.415 Atomic Write Unit (PFail): 1 00:09:29.415 Atomic Compare & Write Unit: 1 00:09:29.415 Fused Compare & Write: Not Supported 00:09:29.415 Scatter-Gather List 00:09:29.415 SGL Command Set: Supported 00:09:29.415 SGL Keyed: Not Supported 00:09:29.415 SGL Bit Bucket Descriptor: Not Supported 00:09:29.415 SGL Metadata Pointer: Not Supported 00:09:29.415 Oversized SGL: Not Supported 00:09:29.415 SGL Metadata Address: Not Supported 00:09:29.415 SGL Offset: Not Supported 00:09:29.415 Transport SGL Data Block: Not Supported 00:09:29.415 Replay Protected Memory Block: Not Supported 00:09:29.415 00:09:29.415 Firmware Slot Information 00:09:29.415 ========================= 00:09:29.415 Active slot: 1 00:09:29.415 Slot 1 Firmware Revision: 1.0 00:09:29.415 00:09:29.415 00:09:29.415 Commands Supported and Effects 00:09:29.415 ============================== 00:09:29.415 Admin Commands 00:09:29.415 -------------- 00:09:29.415 Delete I/O Submission Queue (00h): Supported 00:09:29.415 Create I/O Submission Queue (01h): Supported 00:09:29.415 Get Log Page (02h): Supported 00:09:29.415 Delete I/O Completion Queue (04h): Supported 00:09:29.415 Create I/O Completion Queue (05h): Supported 00:09:29.415 Identify (06h): Supported 00:09:29.415 Abort (08h): Supported 00:09:29.415 Set Features (09h): Supported 00:09:29.415 Get Features (0Ah): Supported 00:09:29.415 Asynchronous Event Request (0Ch): Supported 00:09:29.415 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:29.415 Directive Send (19h): Supported 00:09:29.415 Directive Receive (1Ah): Supported 00:09:29.415 Virtualization Management (1Ch): Supported 00:09:29.415 Doorbell Buffer Config (7Ch): Supported 00:09:29.415 Format NVM (80h): Supported LBA-Change 00:09:29.415 I/O Commands 00:09:29.415 ------------ 00:09:29.415 Flush (00h): Supported LBA-Change 00:09:29.415 Write (01h): Supported LBA-Change 00:09:29.415 Read (02h): Supported 00:09:29.415 Compare (05h): Supported 00:09:29.415 Write Zeroes (08h): Supported LBA-Change 00:09:29.415 Dataset Management (09h): Supported LBA-Change 00:09:29.415 Unknown (0Ch): Supported 00:09:29.415 Unknown (12h): Supported 00:09:29.415 Copy (19h): Supported LBA-Change 00:09:29.415 Unknown (1Dh): 
Supported LBA-Change 00:09:29.415 00:09:29.415 Error Log 00:09:29.415 ========= 00:09:29.415 00:09:29.415 Arbitration 00:09:29.415 =========== 00:09:29.415 Arbitration Burst: no limit 00:09:29.415 00:09:29.415 Power Management 00:09:29.415 ================ 00:09:29.415 Number of Power States: 1 00:09:29.415 Current Power State: Power State #0 00:09:29.415 Power State #0: 00:09:29.415 Max Power: 25.00 W 00:09:29.415 Non-Operational State: Operational 00:09:29.415 Entry Latency: 16 microseconds 00:09:29.415 Exit Latency: 4 microseconds 00:09:29.415 Relative Read Throughput: 0 00:09:29.415 Relative Read Latency: 0 00:09:29.415 Relative Write Throughput: 0 00:09:29.415 Relative Write Latency: 0 00:09:29.415 Idle Power: Not Reported 00:09:29.415 Active Power: Not Reported 00:09:29.415 Non-Operational Permissive Mode: Not Supported 00:09:29.415 00:09:29.415 Health Information 00:09:29.415 ================== 00:09:29.415 Critical Warnings: 00:09:29.415 Available Spare Space: OK 00:09:29.415 Temperature: OK 00:09:29.415 Device Reliability: OK 00:09:29.415 Read Only: No 00:09:29.415 Volatile Memory Backup: OK 00:09:29.415 Current Temperature: 323 Kelvin (50 Celsius) 00:09:29.415 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:29.415 Available Spare: 0% 00:09:29.415 Available Spare Threshold: 0% 00:09:29.415 Life Percentage Used: 0% 00:09:29.415 Data Units Read: 1002 00:09:29.415 Data Units Written: 875 00:09:29.415 Host Read Commands: 46478 00:09:29.415 Host Write Commands: 45367 00:09:29.415 Controller Busy Time: 0 minutes 00:09:29.415 Power Cycles: 0 00:09:29.415 Power On Hours: 0 hours 00:09:29.415 Unsafe Shutdowns: 0 00:09:29.415 Unrecoverable Media Errors: 0 00:09:29.415 Lifetime Error Log Entries: 0 00:09:29.415 Warning Temperature Time: 0 minutes 00:09:29.415 Critical Temperature Time: 0 minutes 00:09:29.415 00:09:29.415 Number of Queues 00:09:29.415 ================ 00:09:29.415 Number of I/O Submission Queues: 64 00:09:29.416 Number of I/O Completion Queues: 64 00:09:29.416 00:09:29.416 ZNS Specific Controller Data 00:09:29.416 ============================ 00:09:29.416 Zone Append Size Limit: 0 00:09:29.416 00:09:29.416 00:09:29.416 Active Namespaces 00:09:29.416 ================= 00:09:29.416 Namespace ID:1 00:09:29.416 Error Recovery Timeout: Unlimited 00:09:29.416 Command Set Identifier: NVM (00h) 00:09:29.416 Deallocate: Supported 00:09:29.416 Deallocated/Unwritten Error: Supported 00:09:29.416 Deallocated Read Value: All 0x00 00:09:29.416 Deallocate in Write Zeroes: Not Supported 00:09:29.416 Deallocated Guard Field: 0xFFFF 00:09:29.416 Flush: Supported 00:09:29.416 Reservation: Not Supported 00:09:29.416 Namespace Sharing Capabilities: Private 00:09:29.416 Size (in LBAs): 1310720 (5GiB) 00:09:29.416 Capacity (in LBAs): 1310720 (5GiB) 00:09:29.416 Utilization (in LBAs): 1310720 (5GiB) 00:09:29.416 Thin Provisioning: Not Supported 00:09:29.416 Per-NS Atomic Units: No 00:09:29.416 Maximum Single Source Range Length: 128 00:09:29.416 Maximum Copy Length: 128 00:09:29.416 Maximum Source Range Count: 128 00:09:29.416 NGUID/EUI64 Never Reused: No 00:09:29.416 Namespace Write Protected: No 00:09:29.416 Number of LBA Formats: 8 00:09:29.416 Current LBA Format: LBA Format #04 00:09:29.416 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:29.416 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:29.416 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:29.416 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:29.416 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:09:29.416 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:29.416 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:29.416 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:29.416 00:09:29.416 NVM Specific Namespace Data 00:09:29.416 =========================== 00:09:29.416 Logical Block Storage Tag Mask: 0 00:09:29.416 Protection Information Capabilities: 00:09:29.416 16b Guard Protection Information Storage Tag Support: No 00:09:29.416 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:29.416 Storage Tag Check Read Support: No 00:09:29.416 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.416 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.416 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.416 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.416 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.416 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.416 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.416 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.416 19:27:56 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:29.416 19:27:56 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:29.676 ===================================================== 00:09:29.676 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:29.676 ===================================================== 00:09:29.676 Controller Capabilities/Features 00:09:29.676 ================================ 00:09:29.676 Vendor ID: 1b36 00:09:29.676 Subsystem Vendor ID: 1af4 00:09:29.676 Serial Number: 12342 00:09:29.676 Model Number: QEMU NVMe Ctrl 00:09:29.676 Firmware Version: 8.0.0 00:09:29.676 Recommended Arb Burst: 6 00:09:29.676 IEEE OUI Identifier: 00 54 52 00:09:29.676 Multi-path I/O 00:09:29.676 May have multiple subsystem ports: No 00:09:29.676 May have multiple controllers: No 00:09:29.676 Associated with SR-IOV VF: No 00:09:29.676 Max Data Transfer Size: 524288 00:09:29.676 Max Number of Namespaces: 256 00:09:29.676 Max Number of I/O Queues: 64 00:09:29.676 NVMe Specification Version (VS): 1.4 00:09:29.676 NVMe Specification Version (Identify): 1.4 00:09:29.676 Maximum Queue Entries: 2048 00:09:29.676 Contiguous Queues Required: Yes 00:09:29.676 Arbitration Mechanisms Supported 00:09:29.676 Weighted Round Robin: Not Supported 00:09:29.676 Vendor Specific: Not Supported 00:09:29.676 Reset Timeout: 7500 ms 00:09:29.676 Doorbell Stride: 4 bytes 00:09:29.676 NVM Subsystem Reset: Not Supported 00:09:29.676 Command Sets Supported 00:09:29.676 NVM Command Set: Supported 00:09:29.676 Boot Partition: Not Supported 00:09:29.676 Memory Page Size Minimum: 4096 bytes 00:09:29.676 Memory Page Size Maximum: 65536 bytes 00:09:29.676 Persistent Memory Region: Not Supported 00:09:29.676 Optional Asynchronous Events Supported 00:09:29.676 Namespace Attribute Notices: Supported 00:09:29.676 Firmware Activation Notices: Not Supported 00:09:29.676 ANA Change Notices: Not Supported 00:09:29.676 PLE Aggregate Log Change Notices: Not Supported 00:09:29.676 LBA Status Info Alert Notices: 
Not Supported 00:09:29.676 EGE Aggregate Log Change Notices: Not Supported 00:09:29.676 Normal NVM Subsystem Shutdown event: Not Supported 00:09:29.676 Zone Descriptor Change Notices: Not Supported 00:09:29.676 Discovery Log Change Notices: Not Supported 00:09:29.676 Controller Attributes 00:09:29.676 128-bit Host Identifier: Not Supported 00:09:29.676 Non-Operational Permissive Mode: Not Supported 00:09:29.676 NVM Sets: Not Supported 00:09:29.676 Read Recovery Levels: Not Supported 00:09:29.676 Endurance Groups: Not Supported 00:09:29.676 Predictable Latency Mode: Not Supported 00:09:29.676 Traffic Based Keep ALive: Not Supported 00:09:29.676 Namespace Granularity: Not Supported 00:09:29.676 SQ Associations: Not Supported 00:09:29.676 UUID List: Not Supported 00:09:29.676 Multi-Domain Subsystem: Not Supported 00:09:29.676 Fixed Capacity Management: Not Supported 00:09:29.676 Variable Capacity Management: Not Supported 00:09:29.676 Delete Endurance Group: Not Supported 00:09:29.676 Delete NVM Set: Not Supported 00:09:29.676 Extended LBA Formats Supported: Supported 00:09:29.676 Flexible Data Placement Supported: Not Supported 00:09:29.676 00:09:29.676 Controller Memory Buffer Support 00:09:29.676 ================================ 00:09:29.676 Supported: No 00:09:29.676 00:09:29.676 Persistent Memory Region Support 00:09:29.676 ================================ 00:09:29.676 Supported: No 00:09:29.676 00:09:29.676 Admin Command Set Attributes 00:09:29.676 ============================ 00:09:29.676 Security Send/Receive: Not Supported 00:09:29.676 Format NVM: Supported 00:09:29.676 Firmware Activate/Download: Not Supported 00:09:29.676 Namespace Management: Supported 00:09:29.676 Device Self-Test: Not Supported 00:09:29.676 Directives: Supported 00:09:29.676 NVMe-MI: Not Supported 00:09:29.676 Virtualization Management: Not Supported 00:09:29.676 Doorbell Buffer Config: Supported 00:09:29.676 Get LBA Status Capability: Not Supported 00:09:29.676 Command & Feature Lockdown Capability: Not Supported 00:09:29.676 Abort Command Limit: 4 00:09:29.676 Async Event Request Limit: 4 00:09:29.676 Number of Firmware Slots: N/A 00:09:29.676 Firmware Slot 1 Read-Only: N/A 00:09:29.676 Firmware Activation Without Reset: N/A 00:09:29.676 Multiple Update Detection Support: N/A 00:09:29.676 Firmware Update Granularity: No Information Provided 00:09:29.676 Per-Namespace SMART Log: Yes 00:09:29.676 Asymmetric Namespace Access Log Page: Not Supported 00:09:29.676 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:29.676 Command Effects Log Page: Supported 00:09:29.676 Get Log Page Extended Data: Supported 00:09:29.676 Telemetry Log Pages: Not Supported 00:09:29.676 Persistent Event Log Pages: Not Supported 00:09:29.676 Supported Log Pages Log Page: May Support 00:09:29.676 Commands Supported & Effects Log Page: Not Supported 00:09:29.676 Feature Identifiers & Effects Log Page:May Support 00:09:29.676 NVMe-MI Commands & Effects Log Page: May Support 00:09:29.676 Data Area 4 for Telemetry Log: Not Supported 00:09:29.677 Error Log Page Entries Supported: 1 00:09:29.677 Keep Alive: Not Supported 00:09:29.677 00:09:29.677 NVM Command Set Attributes 00:09:29.677 ========================== 00:09:29.677 Submission Queue Entry Size 00:09:29.677 Max: 64 00:09:29.677 Min: 64 00:09:29.677 Completion Queue Entry Size 00:09:29.677 Max: 16 00:09:29.677 Min: 16 00:09:29.677 Number of Namespaces: 256 00:09:29.677 Compare Command: Supported 00:09:29.677 Write Uncorrectable Command: Not Supported 00:09:29.677 Dataset Management Command: 
Supported 00:09:29.677 Write Zeroes Command: Supported 00:09:29.677 Set Features Save Field: Supported 00:09:29.677 Reservations: Not Supported 00:09:29.677 Timestamp: Supported 00:09:29.677 Copy: Supported 00:09:29.677 Volatile Write Cache: Present 00:09:29.677 Atomic Write Unit (Normal): 1 00:09:29.677 Atomic Write Unit (PFail): 1 00:09:29.677 Atomic Compare & Write Unit: 1 00:09:29.677 Fused Compare & Write: Not Supported 00:09:29.677 Scatter-Gather List 00:09:29.677 SGL Command Set: Supported 00:09:29.677 SGL Keyed: Not Supported 00:09:29.677 SGL Bit Bucket Descriptor: Not Supported 00:09:29.677 SGL Metadata Pointer: Not Supported 00:09:29.677 Oversized SGL: Not Supported 00:09:29.677 SGL Metadata Address: Not Supported 00:09:29.677 SGL Offset: Not Supported 00:09:29.677 Transport SGL Data Block: Not Supported 00:09:29.677 Replay Protected Memory Block: Not Supported 00:09:29.677 00:09:29.677 Firmware Slot Information 00:09:29.677 ========================= 00:09:29.677 Active slot: 1 00:09:29.677 Slot 1 Firmware Revision: 1.0 00:09:29.677 00:09:29.677 00:09:29.677 Commands Supported and Effects 00:09:29.677 ============================== 00:09:29.677 Admin Commands 00:09:29.677 -------------- 00:09:29.677 Delete I/O Submission Queue (00h): Supported 00:09:29.677 Create I/O Submission Queue (01h): Supported 00:09:29.677 Get Log Page (02h): Supported 00:09:29.677 Delete I/O Completion Queue (04h): Supported 00:09:29.677 Create I/O Completion Queue (05h): Supported 00:09:29.677 Identify (06h): Supported 00:09:29.677 Abort (08h): Supported 00:09:29.677 Set Features (09h): Supported 00:09:29.677 Get Features (0Ah): Supported 00:09:29.677 Asynchronous Event Request (0Ch): Supported 00:09:29.677 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:29.677 Directive Send (19h): Supported 00:09:29.677 Directive Receive (1Ah): Supported 00:09:29.677 Virtualization Management (1Ch): Supported 00:09:29.677 Doorbell Buffer Config (7Ch): Supported 00:09:29.677 Format NVM (80h): Supported LBA-Change 00:09:29.677 I/O Commands 00:09:29.677 ------------ 00:09:29.677 Flush (00h): Supported LBA-Change 00:09:29.677 Write (01h): Supported LBA-Change 00:09:29.677 Read (02h): Supported 00:09:29.677 Compare (05h): Supported 00:09:29.677 Write Zeroes (08h): Supported LBA-Change 00:09:29.677 Dataset Management (09h): Supported LBA-Change 00:09:29.677 Unknown (0Ch): Supported 00:09:29.677 Unknown (12h): Supported 00:09:29.677 Copy (19h): Supported LBA-Change 00:09:29.677 Unknown (1Dh): Supported LBA-Change 00:09:29.677 00:09:29.677 Error Log 00:09:29.677 ========= 00:09:29.677 00:09:29.677 Arbitration 00:09:29.677 =========== 00:09:29.677 Arbitration Burst: no limit 00:09:29.677 00:09:29.677 Power Management 00:09:29.677 ================ 00:09:29.677 Number of Power States: 1 00:09:29.677 Current Power State: Power State #0 00:09:29.677 Power State #0: 00:09:29.677 Max Power: 25.00 W 00:09:29.677 Non-Operational State: Operational 00:09:29.677 Entry Latency: 16 microseconds 00:09:29.677 Exit Latency: 4 microseconds 00:09:29.677 Relative Read Throughput: 0 00:09:29.677 Relative Read Latency: 0 00:09:29.677 Relative Write Throughput: 0 00:09:29.677 Relative Write Latency: 0 00:09:29.677 Idle Power: Not Reported 00:09:29.677 Active Power: Not Reported 00:09:29.677 Non-Operational Permissive Mode: Not Supported 00:09:29.677 00:09:29.677 Health Information 00:09:29.677 ================== 00:09:29.677 Critical Warnings: 00:09:29.677 Available Spare Space: OK 00:09:29.677 Temperature: OK 00:09:29.677 Device 
Reliability: OK 00:09:29.677 Read Only: No 00:09:29.677 Volatile Memory Backup: OK 00:09:29.677 Current Temperature: 323 Kelvin (50 Celsius) 00:09:29.677 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:29.677 Available Spare: 0% 00:09:29.677 Available Spare Threshold: 0% 00:09:29.677 Life Percentage Used: 0% 00:09:29.677 Data Units Read: 2096 00:09:29.677 Data Units Written: 1884 00:09:29.677 Host Read Commands: 94258 00:09:29.677 Host Write Commands: 92527 00:09:29.677 Controller Busy Time: 0 minutes 00:09:29.677 Power Cycles: 0 00:09:29.677 Power On Hours: 0 hours 00:09:29.677 Unsafe Shutdowns: 0 00:09:29.677 Unrecoverable Media Errors: 0 00:09:29.677 Lifetime Error Log Entries: 0 00:09:29.677 Warning Temperature Time: 0 minutes 00:09:29.677 Critical Temperature Time: 0 minutes 00:09:29.677 00:09:29.677 Number of Queues 00:09:29.677 ================ 00:09:29.677 Number of I/O Submission Queues: 64 00:09:29.677 Number of I/O Completion Queues: 64 00:09:29.677 00:09:29.677 ZNS Specific Controller Data 00:09:29.677 ============================ 00:09:29.677 Zone Append Size Limit: 0 00:09:29.677 00:09:29.677 00:09:29.677 Active Namespaces 00:09:29.677 ================= 00:09:29.677 Namespace ID:1 00:09:29.677 Error Recovery Timeout: Unlimited 00:09:29.677 Command Set Identifier: NVM (00h) 00:09:29.677 Deallocate: Supported 00:09:29.677 Deallocated/Unwritten Error: Supported 00:09:29.677 Deallocated Read Value: All 0x00 00:09:29.677 Deallocate in Write Zeroes: Not Supported 00:09:29.677 Deallocated Guard Field: 0xFFFF 00:09:29.677 Flush: Supported 00:09:29.677 Reservation: Not Supported 00:09:29.677 Namespace Sharing Capabilities: Private 00:09:29.677 Size (in LBAs): 1048576 (4GiB) 00:09:29.677 Capacity (in LBAs): 1048576 (4GiB) 00:09:29.677 Utilization (in LBAs): 1048576 (4GiB) 00:09:29.677 Thin Provisioning: Not Supported 00:09:29.677 Per-NS Atomic Units: No 00:09:29.677 Maximum Single Source Range Length: 128 00:09:29.677 Maximum Copy Length: 128 00:09:29.677 Maximum Source Range Count: 128 00:09:29.677 NGUID/EUI64 Never Reused: No 00:09:29.677 Namespace Write Protected: No 00:09:29.677 Number of LBA Formats: 8 00:09:29.677 Current LBA Format: LBA Format #04 00:09:29.677 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:29.677 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:29.677 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:29.677 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:29.677 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:29.677 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:29.677 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:29.677 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:29.677 00:09:29.677 NVM Specific Namespace Data 00:09:29.677 =========================== 00:09:29.677 Logical Block Storage Tag Mask: 0 00:09:29.677 Protection Information Capabilities: 00:09:29.677 16b Guard Protection Information Storage Tag Support: No 00:09:29.677 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:29.677 Storage Tag Check Read Support: No 00:09:29.677 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.677 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.677 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.677 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.677 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.677 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.677 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.677 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.677 Namespace ID:2 00:09:29.677 Error Recovery Timeout: Unlimited 00:09:29.677 Command Set Identifier: NVM (00h) 00:09:29.677 Deallocate: Supported 00:09:29.677 Deallocated/Unwritten Error: Supported 00:09:29.677 Deallocated Read Value: All 0x00 00:09:29.677 Deallocate in Write Zeroes: Not Supported 00:09:29.677 Deallocated Guard Field: 0xFFFF 00:09:29.677 Flush: Supported 00:09:29.677 Reservation: Not Supported 00:09:29.677 Namespace Sharing Capabilities: Private 00:09:29.677 Size (in LBAs): 1048576 (4GiB) 00:09:29.677 Capacity (in LBAs): 1048576 (4GiB) 00:09:29.677 Utilization (in LBAs): 1048576 (4GiB) 00:09:29.677 Thin Provisioning: Not Supported 00:09:29.677 Per-NS Atomic Units: No 00:09:29.677 Maximum Single Source Range Length: 128 00:09:29.677 Maximum Copy Length: 128 00:09:29.677 Maximum Source Range Count: 128 00:09:29.677 NGUID/EUI64 Never Reused: No 00:09:29.677 Namespace Write Protected: No 00:09:29.677 Number of LBA Formats: 8 00:09:29.678 Current LBA Format: LBA Format #04 00:09:29.678 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:29.678 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:29.678 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:29.678 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:29.678 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:29.678 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:29.678 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:29.678 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:29.678 00:09:29.678 NVM Specific Namespace Data 00:09:29.678 =========================== 00:09:29.678 Logical Block Storage Tag Mask: 0 00:09:29.678 Protection Information Capabilities: 00:09:29.678 16b Guard Protection Information Storage Tag Support: No 00:09:29.678 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:29.678 Storage Tag Check Read Support: No 00:09:29.678 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.678 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.678 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.678 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.678 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.678 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.678 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.678 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.678 Namespace ID:3 00:09:29.678 Error Recovery Timeout: Unlimited 00:09:29.678 Command Set Identifier: NVM (00h) 00:09:29.678 Deallocate: Supported 00:09:29.678 Deallocated/Unwritten Error: Supported 00:09:29.678 Deallocated Read Value: All 0x00 00:09:29.678 Deallocate in Write Zeroes: Not Supported 00:09:29.678 Deallocated Guard Field: 0xFFFF 00:09:29.678 Flush: Supported 00:09:29.678 Reservation: Not Supported 00:09:29.678 
Namespace Sharing Capabilities: Private 00:09:29.678 Size (in LBAs): 1048576 (4GiB) 00:09:29.678 Capacity (in LBAs): 1048576 (4GiB) 00:09:29.678 Utilization (in LBAs): 1048576 (4GiB) 00:09:29.678 Thin Provisioning: Not Supported 00:09:29.678 Per-NS Atomic Units: No 00:09:29.678 Maximum Single Source Range Length: 128 00:09:29.678 Maximum Copy Length: 128 00:09:29.678 Maximum Source Range Count: 128 00:09:29.678 NGUID/EUI64 Never Reused: No 00:09:29.678 Namespace Write Protected: No 00:09:29.678 Number of LBA Formats: 8 00:09:29.678 Current LBA Format: LBA Format #04 00:09:29.678 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:29.678 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:29.678 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:29.678 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:29.678 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:29.678 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:29.678 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:29.678 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:29.678 00:09:29.678 NVM Specific Namespace Data 00:09:29.678 =========================== 00:09:29.678 Logical Block Storage Tag Mask: 0 00:09:29.678 Protection Information Capabilities: 00:09:29.678 16b Guard Protection Information Storage Tag Support: No 00:09:29.678 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:29.678 Storage Tag Check Read Support: No 00:09:29.678 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.678 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.678 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.678 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.678 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.678 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.678 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.678 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.678 19:27:56 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:29.678 19:27:56 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:29.940 ===================================================== 00:09:29.940 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:29.940 ===================================================== 00:09:29.940 Controller Capabilities/Features 00:09:29.940 ================================ 00:09:29.940 Vendor ID: 1b36 00:09:29.940 Subsystem Vendor ID: 1af4 00:09:29.940 Serial Number: 12343 00:09:29.940 Model Number: QEMU NVMe Ctrl 00:09:29.940 Firmware Version: 8.0.0 00:09:29.940 Recommended Arb Burst: 6 00:09:29.940 IEEE OUI Identifier: 00 54 52 00:09:29.940 Multi-path I/O 00:09:29.940 May have multiple subsystem ports: No 00:09:29.940 May have multiple controllers: Yes 00:09:29.940 Associated with SR-IOV VF: No 00:09:29.940 Max Data Transfer Size: 524288 00:09:29.940 Max Number of Namespaces: 256 00:09:29.940 Max Number of I/O Queues: 64 00:09:29.940 NVMe Specification Version (VS): 1.4 00:09:29.940 NVMe Specification Version (Identify): 1.4 00:09:29.940 Maximum Queue Entries: 2048 
00:09:29.940 Contiguous Queues Required: Yes 00:09:29.940 Arbitration Mechanisms Supported 00:09:29.940 Weighted Round Robin: Not Supported 00:09:29.940 Vendor Specific: Not Supported 00:09:29.940 Reset Timeout: 7500 ms 00:09:29.940 Doorbell Stride: 4 bytes 00:09:29.940 NVM Subsystem Reset: Not Supported 00:09:29.940 Command Sets Supported 00:09:29.940 NVM Command Set: Supported 00:09:29.940 Boot Partition: Not Supported 00:09:29.940 Memory Page Size Minimum: 4096 bytes 00:09:29.940 Memory Page Size Maximum: 65536 bytes 00:09:29.940 Persistent Memory Region: Not Supported 00:09:29.940 Optional Asynchronous Events Supported 00:09:29.940 Namespace Attribute Notices: Supported 00:09:29.940 Firmware Activation Notices: Not Supported 00:09:29.940 ANA Change Notices: Not Supported 00:09:29.940 PLE Aggregate Log Change Notices: Not Supported 00:09:29.940 LBA Status Info Alert Notices: Not Supported 00:09:29.940 EGE Aggregate Log Change Notices: Not Supported 00:09:29.940 Normal NVM Subsystem Shutdown event: Not Supported 00:09:29.940 Zone Descriptor Change Notices: Not Supported 00:09:29.940 Discovery Log Change Notices: Not Supported 00:09:29.940 Controller Attributes 00:09:29.940 128-bit Host Identifier: Not Supported 00:09:29.940 Non-Operational Permissive Mode: Not Supported 00:09:29.940 NVM Sets: Not Supported 00:09:29.940 Read Recovery Levels: Not Supported 00:09:29.940 Endurance Groups: Supported 00:09:29.940 Predictable Latency Mode: Not Supported 00:09:29.940 Traffic Based Keep Alive: Not Supported 00:09:29.940 Namespace Granularity: Not Supported 00:09:29.940 SQ Associations: Not Supported 00:09:29.940 UUID List: Not Supported 00:09:29.940 Multi-Domain Subsystem: Not Supported 00:09:29.940 Fixed Capacity Management: Not Supported 00:09:29.940 Variable Capacity Management: Not Supported 00:09:29.940 Delete Endurance Group: Not Supported 00:09:29.940 Delete NVM Set: Not Supported 00:09:29.940 Extended LBA Formats Supported: Supported 00:09:29.940 Flexible Data Placement Supported: Supported 00:09:29.940 00:09:29.940 Controller Memory Buffer Support 00:09:29.940 ================================ 00:09:29.940 Supported: No 00:09:29.940 00:09:29.940 Persistent Memory Region Support 00:09:29.940 ================================ 00:09:29.940 Supported: No 00:09:29.940 00:09:29.940 Admin Command Set Attributes 00:09:29.940 ============================ 00:09:29.940 Security Send/Receive: Not Supported 00:09:29.940 Format NVM: Supported 00:09:29.940 Firmware Activate/Download: Not Supported 00:09:29.940 Namespace Management: Supported 00:09:29.940 Device Self-Test: Not Supported 00:09:29.940 Directives: Supported 00:09:29.941 NVMe-MI: Not Supported 00:09:29.941 Virtualization Management: Not Supported 00:09:29.941 Doorbell Buffer Config: Supported 00:09:29.941 Get LBA Status Capability: Not Supported 00:09:29.941 Command & Feature Lockdown Capability: Not Supported 00:09:29.941 Abort Command Limit: 4 00:09:29.941 Async Event Request Limit: 4 00:09:29.941 Number of Firmware Slots: N/A 00:09:29.941 Firmware Slot 1 Read-Only: N/A 00:09:29.941 Firmware Activation Without Reset: N/A 00:09:29.941 Multiple Update Detection Support: N/A 00:09:29.941 Firmware Update Granularity: No Information Provided 00:09:29.941 Per-Namespace SMART Log: Yes 00:09:29.941 Asymmetric Namespace Access Log Page: Not Supported 00:09:29.941 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:29.941 Command Effects Log Page: Supported 00:09:29.941 Get Log Page Extended Data: Supported 00:09:29.941 Telemetry Log Pages: Not
Supported 00:09:29.941 Persistent Event Log Pages: Not Supported 00:09:29.941 Supported Log Pages Log Page: May Support 00:09:29.941 Commands Supported & Effects Log Page: Not Supported 00:09:29.941 Feature Identifiers & Effects Log Page: May Support 00:09:29.941 NVMe-MI Commands & Effects Log Page: May Support 00:09:29.941 Data Area 4 for Telemetry Log: Not Supported 00:09:29.941 Error Log Page Entries Supported: 1 00:09:29.941 Keep Alive: Not Supported 00:09:29.941 00:09:29.941 NVM Command Set Attributes 00:09:29.941 ========================== 00:09:29.941 Submission Queue Entry Size 00:09:29.941 Max: 64 00:09:29.941 Min: 64 00:09:29.941 Completion Queue Entry Size 00:09:29.941 Max: 16 00:09:29.941 Min: 16 00:09:29.941 Number of Namespaces: 256 00:09:29.941 Compare Command: Supported 00:09:29.941 Write Uncorrectable Command: Not Supported 00:09:29.941 Dataset Management Command: Supported 00:09:29.941 Write Zeroes Command: Supported 00:09:29.941 Set Features Save Field: Supported 00:09:29.941 Reservations: Not Supported 00:09:29.941 Timestamp: Supported 00:09:29.941 Copy: Supported 00:09:29.941 Volatile Write Cache: Present 00:09:29.941 Atomic Write Unit (Normal): 1 00:09:29.941 Atomic Write Unit (PFail): 1 00:09:29.941 Atomic Compare & Write Unit: 1 00:09:29.941 Fused Compare & Write: Not Supported 00:09:29.941 Scatter-Gather List 00:09:29.941 SGL Command Set: Supported 00:09:29.941 SGL Keyed: Not Supported 00:09:29.941 SGL Bit Bucket Descriptor: Not Supported 00:09:29.941 SGL Metadata Pointer: Not Supported 00:09:29.941 Oversized SGL: Not Supported 00:09:29.941 SGL Metadata Address: Not Supported 00:09:29.941 SGL Offset: Not Supported 00:09:29.941 Transport SGL Data Block: Not Supported 00:09:29.941 Replay Protected Memory Block: Not Supported 00:09:29.941 00:09:29.941 Firmware Slot Information 00:09:29.941 ========================= 00:09:29.941 Active slot: 1 00:09:29.941 Slot 1 Firmware Revision: 1.0 00:09:29.941 00:09:29.941 00:09:29.941 Commands Supported and Effects 00:09:29.941 ============================== 00:09:29.941 Admin Commands 00:09:29.941 -------------- 00:09:29.941 Delete I/O Submission Queue (00h): Supported 00:09:29.941 Create I/O Submission Queue (01h): Supported 00:09:29.941 Get Log Page (02h): Supported 00:09:29.941 Delete I/O Completion Queue (04h): Supported 00:09:29.941 Create I/O Completion Queue (05h): Supported 00:09:29.941 Identify (06h): Supported 00:09:29.941 Abort (08h): Supported 00:09:29.941 Set Features (09h): Supported 00:09:29.941 Get Features (0Ah): Supported 00:09:29.941 Asynchronous Event Request (0Ch): Supported 00:09:29.941 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:29.941 Directive Send (19h): Supported 00:09:29.941 Directive Receive (1Ah): Supported 00:09:29.941 Virtualization Management (1Ch): Supported 00:09:29.941 Doorbell Buffer Config (7Ch): Supported 00:09:29.941 Format NVM (80h): Supported LBA-Change 00:09:29.941 I/O Commands 00:09:29.941 ------------ 00:09:29.941 Flush (00h): Supported LBA-Change 00:09:29.941 Write (01h): Supported LBA-Change 00:09:29.941 Read (02h): Supported 00:09:29.941 Compare (05h): Supported 00:09:29.941 Write Zeroes (08h): Supported LBA-Change 00:09:29.941 Dataset Management (09h): Supported LBA-Change 00:09:29.941 Unknown (0Ch): Supported 00:09:29.941 Unknown (12h): Supported 00:09:29.941 Copy (19h): Supported LBA-Change 00:09:29.941 Unknown (1Dh): Supported LBA-Change 00:09:29.941 00:09:29.941 Error Log 00:09:29.941 ========= 00:09:29.941 00:09:29.941 Arbitration 00:09:29.941 ===========
00:09:29.941 Arbitration Burst: no limit 00:09:29.941 00:09:29.941 Power Management 00:09:29.941 ================ 00:09:29.941 Number of Power States: 1 00:09:29.941 Current Power State: Power State #0 00:09:29.941 Power State #0: 00:09:29.941 Max Power: 25.00 W 00:09:29.941 Non-Operational State: Operational 00:09:29.941 Entry Latency: 16 microseconds 00:09:29.941 Exit Latency: 4 microseconds 00:09:29.941 Relative Read Throughput: 0 00:09:29.941 Relative Read Latency: 0 00:09:29.941 Relative Write Throughput: 0 00:09:29.941 Relative Write Latency: 0 00:09:29.941 Idle Power: Not Reported 00:09:29.941 Active Power: Not Reported 00:09:29.941 Non-Operational Permissive Mode: Not Supported 00:09:29.941 00:09:29.941 Health Information 00:09:29.941 ================== 00:09:29.941 Critical Warnings: 00:09:29.941 Available Spare Space: OK 00:09:29.941 Temperature: OK 00:09:29.941 Device Reliability: OK 00:09:29.941 Read Only: No 00:09:29.941 Volatile Memory Backup: OK 00:09:29.941 Current Temperature: 323 Kelvin (50 Celsius) 00:09:29.941 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:29.941 Available Spare: 0% 00:09:29.941 Available Spare Threshold: 0% 00:09:29.941 Life Percentage Used: 0% 00:09:29.941 Data Units Read: 749 00:09:29.941 Data Units Written: 678 00:09:29.941 Host Read Commands: 31911 00:09:29.941 Host Write Commands: 31334 00:09:29.941 Controller Busy Time: 0 minutes 00:09:29.941 Power Cycles: 0 00:09:29.941 Power On Hours: 0 hours 00:09:29.941 Unsafe Shutdowns: 0 00:09:29.941 Unrecoverable Media Errors: 0 00:09:29.941 Lifetime Error Log Entries: 0 00:09:29.941 Warning Temperature Time: 0 minutes 00:09:29.941 Critical Temperature Time: 0 minutes 00:09:29.941 00:09:29.941 Number of Queues 00:09:29.941 ================ 00:09:29.941 Number of I/O Submission Queues: 64 00:09:29.941 Number of I/O Completion Queues: 64 00:09:29.941 00:09:29.941 ZNS Specific Controller Data 00:09:29.941 ============================ 00:09:29.941 Zone Append Size Limit: 0 00:09:29.941 00:09:29.941 00:09:29.941 Active Namespaces 00:09:29.941 ================= 00:09:29.941 Namespace ID:1 00:09:29.941 Error Recovery Timeout: Unlimited 00:09:29.941 Command Set Identifier: NVM (00h) 00:09:29.941 Deallocate: Supported 00:09:29.941 Deallocated/Unwritten Error: Supported 00:09:29.941 Deallocated Read Value: All 0x00 00:09:29.941 Deallocate in Write Zeroes: Not Supported 00:09:29.941 Deallocated Guard Field: 0xFFFF 00:09:29.941 Flush: Supported 00:09:29.941 Reservation: Not Supported 00:09:29.941 Namespace Sharing Capabilities: Multiple Controllers 00:09:29.941 Size (in LBAs): 262144 (1GiB) 00:09:29.941 Capacity (in LBAs): 262144 (1GiB) 00:09:29.941 Utilization (in LBAs): 262144 (1GiB) 00:09:29.941 Thin Provisioning: Not Supported 00:09:29.941 Per-NS Atomic Units: No 00:09:29.941 Maximum Single Source Range Length: 128 00:09:29.941 Maximum Copy Length: 128 00:09:29.941 Maximum Source Range Count: 128 00:09:29.941 NGUID/EUI64 Never Reused: No 00:09:29.941 Namespace Write Protected: No 00:09:29.941 Endurance group ID: 1 00:09:29.941 Number of LBA Formats: 8 00:09:29.941 Current LBA Format: LBA Format #04 00:09:29.941 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:29.941 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:29.941 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:29.941 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:29.941 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:29.941 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:29.941 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:09:29.941 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:29.941 00:09:29.941 Get Feature FDP: 00:09:29.941 ================ 00:09:29.941 Enabled: Yes 00:09:29.941 FDP configuration index: 0 00:09:29.941 00:09:29.941 FDP configurations log page 00:09:29.941 =========================== 00:09:29.941 Number of FDP configurations: 1 00:09:29.941 Version: 0 00:09:29.941 Size: 112 00:09:29.941 FDP Configuration Descriptor: 0 00:09:29.941 Descriptor Size: 96 00:09:29.941 Reclaim Group Identifier format: 2 00:09:29.941 FDP Volatile Write Cache: Not Present 00:09:29.941 FDP Configuration: Valid 00:09:29.941 Vendor Specific Size: 0 00:09:29.941 Number of Reclaim Groups: 2 00:09:29.942 Number of Reclaim Unit Handles: 8 00:09:29.942 Max Placement Identifiers: 128 00:09:29.942 Number of Namespaces Supported: 256 00:09:29.942 Reclaim Unit Nominal Size: 6000000 bytes 00:09:29.942 Estimated Reclaim Unit Time Limit: Not Reported 00:09:29.942 RUH Desc #000: RUH Type: Initially Isolated 00:09:29.942 RUH Desc #001: RUH Type: Initially Isolated 00:09:29.942 RUH Desc #002: RUH Type: Initially Isolated 00:09:29.942 RUH Desc #003: RUH Type: Initially Isolated 00:09:29.942 RUH Desc #004: RUH Type: Initially Isolated 00:09:29.942 RUH Desc #005: RUH Type: Initially Isolated 00:09:29.942 RUH Desc #006: RUH Type: Initially Isolated 00:09:29.942 RUH Desc #007: RUH Type: Initially Isolated 00:09:29.942 00:09:29.942 FDP reclaim unit handle usage log page 00:09:29.942 ====================================== 00:09:29.942 Number of Reclaim Unit Handles: 8 00:09:29.942 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:29.942 RUH Usage Desc #001: RUH Attributes: Unused 00:09:29.942 RUH Usage Desc #002: RUH Attributes: Unused 00:09:29.942 RUH Usage Desc #003: RUH Attributes: Unused 00:09:29.942 RUH Usage Desc #004: RUH Attributes: Unused 00:09:29.942 RUH Usage Desc #005: RUH Attributes: Unused 00:09:29.942 RUH Usage Desc #006: RUH Attributes: Unused 00:09:29.942 RUH Usage Desc #007: RUH Attributes: Unused 00:09:29.942 00:09:29.942 FDP statistics log page 00:09:29.942 ======================= 00:09:29.942 Host bytes with metadata written: 416522240 00:09:29.942 Media bytes with metadata written: 416567296 00:09:29.942 Media bytes erased: 0 00:09:29.942 00:09:29.942 FDP events log page 00:09:29.942 =================== 00:09:29.942 Number of FDP events: 0 00:09:29.942 00:09:29.942 NVM Specific Namespace Data 00:09:29.942 =========================== 00:09:29.942 Logical Block Storage Tag Mask: 0 00:09:29.942 Protection Information Capabilities: 00:09:29.942 16b Guard Protection Information Storage Tag Support: No 00:09:29.942 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:29.942 Storage Tag Check Read Support: No 00:09:29.942 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.942 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.942 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.942 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.942 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.942 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.942 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.942 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.942 00:09:29.942 real 0m1.208s 00:09:29.942 user 0m0.436s 00:09:29.942 sys 0m0.543s 00:09:29.942 19:27:57 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.942 ************************************ 00:09:29.942 END TEST nvme_identify 00:09:29.942 ************************************ 00:09:29.942 19:27:57 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:29.942 19:27:57 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:29.942 19:27:57 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.942 19:27:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.942 19:27:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:29.942 ************************************ 00:09:29.942 START TEST nvme_perf 00:09:29.942 ************************************ 00:09:29.942 19:27:57 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:09:29.942 19:27:57 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:31.331 Initializing NVMe Controllers 00:09:31.331 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:31.331 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:31.331 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:31.331 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:31.331 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:31.331 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:31.331 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:31.331 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:31.331 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:31.331 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:31.331 Initialization complete. Launching workers. 
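(Editor's annotation; not part of the captured CI output.) The nvme_perf test above drives all six attached namespaces with a single spdk_nvme_perf run whose results follow. For reproducing the measurement outside this pipeline, here is a minimal sketch of the same invocation, using the binary path shown in the log; it assumes the NVMe devices have already been bound to a userspace driver (e.g. via SPDK's scripts/setup.sh). The per-flag glosses are my reading of spdk_nvme_perf's options and should be checked against --help for the SPDK build in use; the gloss for -N in particular is an assumption.

# Re-run the latency measurement captured below (hedged sketch, see note above).
#   -q 128    queue depth: 128 outstanding I/Os
#   -w read   workload: 100% sequential reads
#   -o 12288  I/O size in bytes (12 KiB)
#   -t 1      run time in seconds
#   -LL       latency tracking; doubling the flag also emits per-device histograms
#   -i 0      shared-memory group ID (matches the identify runs earlier in the log)
#   -N        assumed: skip controller shutdown notification on exit
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -q 128 -w read -o 12288 -t 1 -LL -i 0 -N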
00:09:31.331 ======================================================== 00:09:31.331 Latency(us) 00:09:31.331 Device Information : IOPS MiB/s Average min max 00:09:31.331 PCIE (0000:00:13.0) NSID 1 from core 0: 7599.69 89.06 16865.05 9893.53 44232.11 00:09:31.331 PCIE (0000:00:10.0) NSID 1 from core 0: 7599.69 89.06 16838.01 9496.21 43393.11 00:09:31.331 PCIE (0000:00:11.0) NSID 1 from core 0: 7599.69 89.06 16812.27 9303.33 42470.65 00:09:31.331 PCIE (0000:00:12.0) NSID 1 from core 0: 7599.69 89.06 16783.92 8500.37 42206.74 00:09:31.331 PCIE (0000:00:12.0) NSID 2 from core 0: 7599.69 89.06 16755.45 8291.51 41377.14 00:09:31.331 PCIE (0000:00:12.0) NSID 3 from core 0: 7663.55 89.81 16588.87 8217.80 31353.94 00:09:31.331 ======================================================== 00:09:31.331 Total : 45662.01 535.10 16773.67 8217.80 44232.11 00:09:31.331 00:09:31.331 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:31.331 ================================================================================= 00:09:31.331 1.00000% : 11746.068us 00:09:31.331 10.00000% : 13712.148us 00:09:31.331 25.00000% : 14922.043us 00:09:31.331 50.00000% : 16333.588us 00:09:31.331 75.00000% : 18450.905us 00:09:31.331 90.00000% : 19761.625us 00:09:31.331 95.00000% : 20669.046us 00:09:31.331 98.00000% : 21979.766us 00:09:31.331 99.00000% : 35691.914us 00:09:31.331 99.50000% : 42951.286us 00:09:31.331 99.90000% : 44161.182us 00:09:31.331 99.99000% : 44362.831us 00:09:31.331 99.99900% : 44362.831us 00:09:31.331 99.99990% : 44362.831us 00:09:31.331 99.99999% : 44362.831us 00:09:31.331 00:09:31.331 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:31.331 ================================================================================= 00:09:31.331 1.00000% : 11443.594us 00:09:31.331 10.00000% : 13611.323us 00:09:31.331 25.00000% : 14922.043us 00:09:31.331 50.00000% : 16434.412us 00:09:31.331 75.00000% : 18350.080us 00:09:31.331 90.00000% : 19862.449us 00:09:31.331 95.00000% : 20870.695us 00:09:31.331 98.00000% : 21979.766us 00:09:31.331 99.00000% : 34280.369us 00:09:31.331 99.50000% : 42144.689us 00:09:31.331 99.90000% : 43354.585us 00:09:31.331 99.99000% : 43556.234us 00:09:31.331 99.99900% : 43556.234us 00:09:31.331 99.99990% : 43556.234us 00:09:31.331 99.99999% : 43556.234us 00:09:31.331 00:09:31.331 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:31.331 ================================================================================= 00:09:31.331 1.00000% : 11443.594us 00:09:31.331 10.00000% : 13510.498us 00:09:31.331 25.00000% : 14922.043us 00:09:31.331 50.00000% : 16434.412us 00:09:31.331 75.00000% : 18350.080us 00:09:31.331 90.00000% : 19761.625us 00:09:31.331 95.00000% : 20769.871us 00:09:31.331 98.00000% : 22181.415us 00:09:31.332 99.00000% : 32667.175us 00:09:31.332 99.50000% : 41136.443us 00:09:31.332 99.90000% : 42346.338us 00:09:31.332 99.99000% : 42547.988us 00:09:31.332 99.99900% : 42547.988us 00:09:31.332 99.99990% : 42547.988us 00:09:31.332 99.99999% : 42547.988us 00:09:31.332 00:09:31.332 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:31.332 ================================================================================= 00:09:31.332 1.00000% : 10687.409us 00:09:31.332 10.00000% : 13611.323us 00:09:31.332 25.00000% : 14922.043us 00:09:31.332 50.00000% : 16434.412us 00:09:31.332 75.00000% : 18450.905us 00:09:31.332 90.00000% : 19660.800us 00:09:31.332 95.00000% : 20467.397us 00:09:31.332 98.00000% : 21878.942us 
00:09:31.332 99.00000% : 31860.578us 00:09:31.332 99.50000% : 41338.092us 00:09:31.332 99.90000% : 42144.689us 00:09:31.332 99.99000% : 42346.338us 00:09:31.332 99.99900% : 42346.338us 00:09:31.332 99.99990% : 42346.338us 00:09:31.332 99.99999% : 42346.338us 00:09:31.332 00:09:31.332 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:31.332 ================================================================================= 00:09:31.332 1.00000% : 10334.523us 00:09:31.332 10.00000% : 13611.323us 00:09:31.332 25.00000% : 14922.043us 00:09:31.332 50.00000% : 16434.412us 00:09:31.332 75.00000% : 18350.080us 00:09:31.332 90.00000% : 19761.625us 00:09:31.332 95.00000% : 20669.046us 00:09:31.332 98.00000% : 21677.292us 00:09:31.332 99.00000% : 31053.982us 00:09:31.332 99.50000% : 40531.495us 00:09:31.332 99.90000% : 41338.092us 00:09:31.332 99.99000% : 41539.742us 00:09:31.332 99.99900% : 41539.742us 00:09:31.332 99.99990% : 41539.742us 00:09:31.332 99.99999% : 41539.742us 00:09:31.332 00:09:31.332 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:31.332 ================================================================================= 00:09:31.332 1.00000% : 10384.935us 00:09:31.332 10.00000% : 13712.148us 00:09:31.332 25.00000% : 14821.218us 00:09:31.332 50.00000% : 16333.588us 00:09:31.332 75.00000% : 18350.080us 00:09:31.332 90.00000% : 19761.625us 00:09:31.332 95.00000% : 20467.397us 00:09:31.332 98.00000% : 21374.818us 00:09:31.332 99.00000% : 23391.311us 00:09:31.332 99.50000% : 30045.735us 00:09:31.332 99.90000% : 31255.631us 00:09:31.332 99.99000% : 31457.280us 00:09:31.332 99.99900% : 31457.280us 00:09:31.332 99.99990% : 31457.280us 00:09:31.332 99.99999% : 31457.280us 00:09:31.332 00:09:31.332 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:31.332 ============================================================================== 00:09:31.332 Range in us Cumulative IO count 00:09:31.332 9880.812 - 9931.225: 0.0657% ( 5) 00:09:31.332 9931.225 - 9981.637: 0.1050% ( 3) 00:09:31.332 9981.637 - 10032.049: 0.1182% ( 1) 00:09:31.332 10032.049 - 10082.462: 0.1838% ( 5) 00:09:31.332 10082.462 - 10132.874: 0.2101% ( 2) 00:09:31.332 10132.874 - 10183.286: 0.2757% ( 5) 00:09:31.332 10183.286 - 10233.698: 0.3020% ( 2) 00:09:31.332 10284.111 - 10334.523: 0.3414% ( 3) 00:09:31.332 10334.523 - 10384.935: 0.3939% ( 4) 00:09:31.332 10384.935 - 10435.348: 0.4202% ( 2) 00:09:31.332 10435.348 - 10485.760: 0.4464% ( 2) 00:09:31.332 10485.760 - 10536.172: 0.4727% ( 2) 00:09:31.332 10536.172 - 10586.585: 0.4989% ( 2) 00:09:31.332 10586.585 - 10636.997: 0.5383% ( 3) 00:09:31.332 10636.997 - 10687.409: 0.5646% ( 2) 00:09:31.332 10687.409 - 10737.822: 0.5909% ( 2) 00:09:31.332 10737.822 - 10788.234: 0.6303% ( 3) 00:09:31.332 10788.234 - 10838.646: 0.6565% ( 2) 00:09:31.332 10838.646 - 10889.058: 0.6959% ( 3) 00:09:31.332 10889.058 - 10939.471: 0.7353% ( 3) 00:09:31.332 10939.471 - 10989.883: 0.7616% ( 2) 00:09:31.332 10989.883 - 11040.295: 0.8009% ( 3) 00:09:31.332 11040.295 - 11090.708: 0.8272% ( 2) 00:09:31.332 11090.708 - 11141.120: 0.8403% ( 1) 00:09:31.332 11544.418 - 11594.831: 0.8535% ( 1) 00:09:31.332 11594.831 - 11645.243: 0.8929% ( 3) 00:09:31.332 11645.243 - 11695.655: 0.9191% ( 2) 00:09:31.332 11695.655 - 11746.068: 1.0898% ( 13) 00:09:31.332 11746.068 - 11796.480: 1.1555% ( 5) 00:09:31.332 11796.480 - 11846.892: 1.2605% ( 8) 00:09:31.332 11846.892 - 11897.305: 1.2999% ( 3) 00:09:31.332 11897.305 - 11947.717: 1.3655% ( 5) 00:09:31.332 11947.717 - 
11998.129: 1.4443% ( 6) 00:09:31.332 11998.129 - 12048.542: 1.5231% ( 6) 00:09:31.332 12048.542 - 12098.954: 1.5888% ( 5) 00:09:31.332 12098.954 - 12149.366: 1.6675% ( 6) 00:09:31.332 12149.366 - 12199.778: 1.7726% ( 8) 00:09:31.332 12199.778 - 12250.191: 1.8776% ( 8) 00:09:31.332 12250.191 - 12300.603: 1.9958% ( 9) 00:09:31.332 12300.603 - 12351.015: 2.1402% ( 11) 00:09:31.332 12351.015 - 12401.428: 2.3109% ( 13) 00:09:31.332 12401.428 - 12451.840: 2.4422% ( 10) 00:09:31.332 12451.840 - 12502.252: 2.5867% ( 11) 00:09:31.332 12502.252 - 12552.665: 2.7836% ( 15) 00:09:31.332 12552.665 - 12603.077: 2.9806% ( 15) 00:09:31.332 12603.077 - 12653.489: 3.3220% ( 26) 00:09:31.332 12653.489 - 12703.902: 3.5583% ( 18) 00:09:31.332 12703.902 - 12754.314: 3.7684% ( 16) 00:09:31.332 12754.314 - 12804.726: 4.0047% ( 18) 00:09:31.332 12804.726 - 12855.138: 4.2542% ( 19) 00:09:31.332 12855.138 - 12905.551: 4.5037% ( 19) 00:09:31.332 12905.551 - 13006.375: 4.9632% ( 35) 00:09:31.332 13006.375 - 13107.200: 5.6066% ( 49) 00:09:31.332 13107.200 - 13208.025: 6.2631% ( 50) 00:09:31.332 13208.025 - 13308.849: 6.8277% ( 43) 00:09:31.332 13308.849 - 13409.674: 7.4974% ( 51) 00:09:31.332 13409.674 - 13510.498: 8.1408% ( 49) 00:09:31.332 13510.498 - 13611.323: 9.0336% ( 68) 00:09:31.332 13611.323 - 13712.148: 10.0840% ( 80) 00:09:31.332 13712.148 - 13812.972: 11.1738% ( 83) 00:09:31.332 13812.972 - 13913.797: 12.3950% ( 93) 00:09:31.332 13913.797 - 14014.622: 13.4979% ( 84) 00:09:31.332 14014.622 - 14115.446: 14.7847% ( 98) 00:09:31.332 14115.446 - 14216.271: 16.0452% ( 96) 00:09:31.332 14216.271 - 14317.095: 17.3188% ( 97) 00:09:31.332 14317.095 - 14417.920: 18.5268% ( 92) 00:09:31.332 14417.920 - 14518.745: 19.7742% ( 95) 00:09:31.332 14518.745 - 14619.569: 21.1791% ( 107) 00:09:31.332 14619.569 - 14720.394: 22.4921% ( 100) 00:09:31.332 14720.394 - 14821.218: 23.8577% ( 104) 00:09:31.332 14821.218 - 14922.043: 25.3808% ( 116) 00:09:31.332 14922.043 - 15022.868: 27.1140% ( 132) 00:09:31.332 15022.868 - 15123.692: 28.9785% ( 142) 00:09:31.332 15123.692 - 15224.517: 30.8824% ( 145) 00:09:31.332 15224.517 - 15325.342: 32.7600% ( 143) 00:09:31.332 15325.342 - 15426.166: 34.5851% ( 139) 00:09:31.332 15426.166 - 15526.991: 36.3708% ( 136) 00:09:31.332 15526.991 - 15627.815: 38.0909% ( 131) 00:09:31.332 15627.815 - 15728.640: 39.8897% ( 137) 00:09:31.332 15728.640 - 15829.465: 41.7279% ( 140) 00:09:31.332 15829.465 - 15930.289: 43.4743% ( 133) 00:09:31.332 15930.289 - 16031.114: 45.2206% ( 133) 00:09:31.332 16031.114 - 16131.938: 47.0063% ( 136) 00:09:31.332 16131.938 - 16232.763: 48.6607% ( 126) 00:09:31.332 16232.763 - 16333.588: 50.2363% ( 120) 00:09:31.332 16333.588 - 16434.412: 51.9170% ( 128) 00:09:31.332 16434.412 - 16535.237: 53.5583% ( 125) 00:09:31.332 16535.237 - 16636.062: 55.2521% ( 129) 00:09:31.332 16636.062 - 16736.886: 56.7227% ( 112) 00:09:31.332 16736.886 - 16837.711: 58.0882% ( 104) 00:09:31.332 16837.711 - 16938.535: 59.4275% ( 102) 00:09:31.332 16938.535 - 17039.360: 60.6486% ( 93) 00:09:31.332 17039.360 - 17140.185: 61.8829% ( 94) 00:09:31.332 17140.185 - 17241.009: 63.2747% ( 106) 00:09:31.332 17241.009 - 17341.834: 64.4827% ( 92) 00:09:31.332 17341.834 - 17442.658: 65.4412% ( 73) 00:09:31.332 17442.658 - 17543.483: 66.6492% ( 92) 00:09:31.332 17543.483 - 17644.308: 67.6208% ( 74) 00:09:31.332 17644.308 - 17745.132: 68.5399% ( 70) 00:09:31.332 17745.132 - 17845.957: 69.2883% ( 57) 00:09:31.332 17845.957 - 17946.782: 70.1024% ( 62) 00:09:31.332 17946.782 - 18047.606: 71.0347% ( 71) 00:09:31.332 
18047.606 - 18148.431: 72.0063% ( 74) 00:09:31.332 18148.431 - 18249.255: 73.0961% ( 83) 00:09:31.332 18249.255 - 18350.080: 74.3304% ( 94) 00:09:31.332 18350.080 - 18450.905: 75.5777% ( 95) 00:09:31.332 18450.905 - 18551.729: 76.8382% ( 96) 00:09:31.332 18551.729 - 18652.554: 78.0725% ( 94) 00:09:31.332 18652.554 - 18753.378: 79.1886% ( 85) 00:09:31.332 18753.378 - 18854.203: 80.3571% ( 89) 00:09:31.332 18854.203 - 18955.028: 81.5651% ( 92) 00:09:31.332 18955.028 - 19055.852: 82.8650% ( 99) 00:09:31.332 19055.852 - 19156.677: 84.1518% ( 98) 00:09:31.332 19156.677 - 19257.502: 85.1759% ( 78) 00:09:31.332 19257.502 - 19358.326: 86.3314% ( 88) 00:09:31.332 19358.326 - 19459.151: 87.3293% ( 76) 00:09:31.332 19459.151 - 19559.975: 88.2353% ( 69) 00:09:31.332 19559.975 - 19660.800: 89.1544% ( 70) 00:09:31.332 19660.800 - 19761.625: 90.0341% ( 67) 00:09:31.332 19761.625 - 19862.449: 90.8482% ( 62) 00:09:31.332 19862.449 - 19963.274: 91.4916% ( 49) 00:09:31.332 19963.274 - 20064.098: 92.2663% ( 59) 00:09:31.332 20064.098 - 20164.923: 92.8309% ( 43) 00:09:31.332 20164.923 - 20265.748: 93.4480% ( 47) 00:09:31.332 20265.748 - 20366.572: 93.9601% ( 39) 00:09:31.332 20366.572 - 20467.397: 94.3934% ( 33) 00:09:31.332 20467.397 - 20568.222: 94.7610% ( 28) 00:09:31.332 20568.222 - 20669.046: 95.0893% ( 25) 00:09:31.332 20669.046 - 20769.871: 95.4175% ( 25) 00:09:31.332 20769.871 - 20870.695: 95.7589% ( 26) 00:09:31.332 20870.695 - 20971.520: 96.0347% ( 21) 00:09:31.332 20971.520 - 21072.345: 96.2973% ( 20) 00:09:31.332 21072.345 - 21173.169: 96.4942% ( 15) 00:09:31.332 21173.169 - 21273.994: 96.7700% ( 21) 00:09:31.333 21273.994 - 21374.818: 96.9932% ( 17) 00:09:31.333 21374.818 - 21475.643: 97.2033% ( 16) 00:09:31.333 21475.643 - 21576.468: 97.4265% ( 17) 00:09:31.333 21576.468 - 21677.292: 97.6497% ( 17) 00:09:31.333 21677.292 - 21778.117: 97.8072% ( 12) 00:09:31.333 21778.117 - 21878.942: 97.9254% ( 9) 00:09:31.333 21878.942 - 21979.766: 98.0567% ( 10) 00:09:31.333 21979.766 - 22080.591: 98.1618% ( 8) 00:09:31.333 22080.591 - 22181.415: 98.2274% ( 5) 00:09:31.333 22181.415 - 22282.240: 98.2799% ( 4) 00:09:31.333 22282.240 - 22383.065: 98.3193% ( 3) 00:09:31.333 34280.369 - 34482.018: 98.4375% ( 9) 00:09:31.333 34482.018 - 34683.668: 98.5557% ( 9) 00:09:31.333 34683.668 - 34885.317: 98.6607% ( 8) 00:09:31.333 34885.317 - 35086.966: 98.7789% ( 9) 00:09:31.333 35086.966 - 35288.615: 98.8839% ( 8) 00:09:31.333 35288.615 - 35490.265: 98.9758% ( 7) 00:09:31.333 35490.265 - 35691.914: 99.0809% ( 8) 00:09:31.333 35691.914 - 35893.563: 99.1597% ( 6) 00:09:31.333 41943.040 - 42144.689: 99.2253% ( 5) 00:09:31.333 42144.689 - 42346.338: 99.3041% ( 6) 00:09:31.333 42346.338 - 42547.988: 99.3697% ( 5) 00:09:31.333 42547.988 - 42749.637: 99.4485% ( 6) 00:09:31.333 42749.637 - 42951.286: 99.5273% ( 6) 00:09:31.333 42951.286 - 43152.935: 99.5930% ( 5) 00:09:31.333 43152.935 - 43354.585: 99.6586% ( 5) 00:09:31.333 43354.585 - 43556.234: 99.7374% ( 6) 00:09:31.333 43556.234 - 43757.883: 99.8030% ( 5) 00:09:31.333 43757.883 - 43959.532: 99.8950% ( 7) 00:09:31.333 43959.532 - 44161.182: 99.9737% ( 6) 00:09:31.333 44161.182 - 44362.831: 100.0000% ( 2) 00:09:31.333 00:09:31.333 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:31.333 ============================================================================== 00:09:31.333 Range in us Cumulative IO count 00:09:31.333 9477.514 - 9527.926: 0.0394% ( 3) 00:09:31.333 9527.926 - 9578.338: 0.0657% ( 2) 00:09:31.333 9578.338 - 9628.751: 0.1313% ( 5) 00:09:31.333 
9679.163 - 9729.575: 0.1576% ( 2) 00:09:31.333 9729.575 - 9779.988: 0.1970% ( 3) 00:09:31.333 9779.988 - 9830.400: 0.2363% ( 3) 00:09:31.333 9830.400 - 9880.812: 0.2495% ( 1) 00:09:31.333 9880.812 - 9931.225: 0.2757% ( 2) 00:09:31.333 9931.225 - 9981.637: 0.2889% ( 1) 00:09:31.333 9981.637 - 10032.049: 0.3283% ( 3) 00:09:31.333 10032.049 - 10082.462: 0.3939% ( 5) 00:09:31.333 10132.874 - 10183.286: 0.4202% ( 2) 00:09:31.333 10183.286 - 10233.698: 0.4333% ( 1) 00:09:31.333 10233.698 - 10284.111: 0.5121% ( 6) 00:09:31.333 10334.523 - 10384.935: 0.5252% ( 1) 00:09:31.333 10384.935 - 10435.348: 0.5777% ( 4) 00:09:31.333 10435.348 - 10485.760: 0.6040% ( 2) 00:09:31.333 10485.760 - 10536.172: 0.6434% ( 3) 00:09:31.333 10536.172 - 10586.585: 0.6696% ( 2) 00:09:31.333 10586.585 - 10636.997: 0.7090% ( 3) 00:09:31.333 10636.997 - 10687.409: 0.7222% ( 1) 00:09:31.333 10687.409 - 10737.822: 0.7484% ( 2) 00:09:31.333 10737.822 - 10788.234: 0.7747% ( 2) 00:09:31.333 10788.234 - 10838.646: 0.8141% ( 3) 00:09:31.333 10838.646 - 10889.058: 0.8403% ( 2) 00:09:31.333 11241.945 - 11292.357: 0.8797% ( 3) 00:09:31.333 11292.357 - 11342.769: 0.9585% ( 6) 00:09:31.333 11342.769 - 11393.182: 0.9979% ( 3) 00:09:31.333 11393.182 - 11443.594: 1.0110% ( 1) 00:09:31.333 11494.006 - 11544.418: 1.0504% ( 3) 00:09:31.333 11544.418 - 11594.831: 1.0767% ( 2) 00:09:31.333 11594.831 - 11645.243: 1.0898% ( 1) 00:09:31.333 11645.243 - 11695.655: 1.1686% ( 6) 00:09:31.333 11695.655 - 11746.068: 1.1817% ( 1) 00:09:31.333 11746.068 - 11796.480: 1.2211% ( 3) 00:09:31.333 11796.480 - 11846.892: 1.2736% ( 4) 00:09:31.333 11846.892 - 11897.305: 1.3787% ( 8) 00:09:31.333 11897.305 - 11947.717: 1.4575% ( 6) 00:09:31.333 11947.717 - 11998.129: 1.5625% ( 8) 00:09:31.333 11998.129 - 12048.542: 1.6150% ( 4) 00:09:31.333 12048.542 - 12098.954: 1.7201% ( 8) 00:09:31.333 12098.954 - 12149.366: 1.8514% ( 10) 00:09:31.333 12149.366 - 12199.778: 1.9170% ( 5) 00:09:31.333 12199.778 - 12250.191: 2.0877% ( 13) 00:09:31.333 12250.191 - 12300.603: 2.2059% ( 9) 00:09:31.333 12300.603 - 12351.015: 2.3372% ( 10) 00:09:31.333 12351.015 - 12401.428: 2.5079% ( 13) 00:09:31.333 12401.428 - 12451.840: 2.7311% ( 17) 00:09:31.333 12451.840 - 12502.252: 3.0593% ( 25) 00:09:31.333 12502.252 - 12552.665: 3.3088% ( 19) 00:09:31.333 12552.665 - 12603.077: 3.4926% ( 14) 00:09:31.333 12603.077 - 12653.489: 3.7946% ( 23) 00:09:31.333 12653.489 - 12703.902: 4.0047% ( 16) 00:09:31.333 12703.902 - 12754.314: 4.2411% ( 18) 00:09:31.333 12754.314 - 12804.726: 4.5562% ( 24) 00:09:31.333 12804.726 - 12855.138: 4.7400% ( 14) 00:09:31.333 12855.138 - 12905.551: 5.0551% ( 24) 00:09:31.333 12905.551 - 13006.375: 5.6197% ( 43) 00:09:31.333 13006.375 - 13107.200: 6.2894% ( 51) 00:09:31.333 13107.200 - 13208.025: 7.0247% ( 56) 00:09:31.333 13208.025 - 13308.849: 7.7075% ( 52) 00:09:31.333 13308.849 - 13409.674: 8.5741% ( 66) 00:09:31.333 13409.674 - 13510.498: 9.6245% ( 80) 00:09:31.333 13510.498 - 13611.323: 10.2022% ( 44) 00:09:31.333 13611.323 - 13712.148: 11.0557% ( 65) 00:09:31.333 13712.148 - 13812.972: 11.8829% ( 63) 00:09:31.333 13812.972 - 13913.797: 13.0252% ( 87) 00:09:31.333 13913.797 - 14014.622: 13.8130% ( 60) 00:09:31.333 14014.622 - 14115.446: 14.8241% ( 77) 00:09:31.333 14115.446 - 14216.271: 15.8876% ( 81) 00:09:31.333 14216.271 - 14317.095: 17.2138% ( 101) 00:09:31.333 14317.095 - 14417.920: 18.6975% ( 113) 00:09:31.333 14417.920 - 14518.745: 19.9055% ( 92) 00:09:31.333 14518.745 - 14619.569: 21.0872% ( 90) 00:09:31.333 14619.569 - 14720.394: 22.4133% ( 101) 
00:09:31.333 14720.394 - 14821.218: 23.9233% ( 115) 00:09:31.333 14821.218 - 14922.043: 25.4727% ( 118) 00:09:31.333 14922.043 - 15022.868: 26.8908% ( 108) 00:09:31.333 15022.868 - 15123.692: 28.4139% ( 116) 00:09:31.333 15123.692 - 15224.517: 30.1996% ( 136) 00:09:31.333 15224.517 - 15325.342: 32.0772% ( 143) 00:09:31.333 15325.342 - 15426.166: 33.6660% ( 121) 00:09:31.333 15426.166 - 15526.991: 35.4517% ( 136) 00:09:31.333 15526.991 - 15627.815: 37.2899% ( 140) 00:09:31.333 15627.815 - 15728.640: 39.1150% ( 139) 00:09:31.333 15728.640 - 15829.465: 41.2159% ( 160) 00:09:31.333 15829.465 - 15930.289: 42.9622% ( 133) 00:09:31.333 15930.289 - 16031.114: 44.7085% ( 133) 00:09:31.333 16031.114 - 16131.938: 46.2710% ( 119) 00:09:31.333 16131.938 - 16232.763: 48.0830% ( 138) 00:09:31.333 16232.763 - 16333.588: 49.6717% ( 121) 00:09:31.333 16333.588 - 16434.412: 51.4968% ( 139) 00:09:31.333 16434.412 - 16535.237: 53.2563% ( 134) 00:09:31.333 16535.237 - 16636.062: 54.6087% ( 103) 00:09:31.333 16636.062 - 16736.886: 56.2894% ( 128) 00:09:31.333 16736.886 - 16837.711: 57.6418% ( 103) 00:09:31.333 16837.711 - 16938.535: 59.1518% ( 115) 00:09:31.333 16938.535 - 17039.360: 60.5305% ( 105) 00:09:31.333 17039.360 - 17140.185: 61.8435% ( 100) 00:09:31.333 17140.185 - 17241.009: 63.1565% ( 100) 00:09:31.333 17241.009 - 17341.834: 64.4170% ( 96) 00:09:31.333 17341.834 - 17442.658: 65.6250% ( 92) 00:09:31.333 17442.658 - 17543.483: 66.5310% ( 69) 00:09:31.333 17543.483 - 17644.308: 67.4764% ( 72) 00:09:31.333 17644.308 - 17745.132: 68.5137% ( 79) 00:09:31.333 17745.132 - 17845.957: 69.7610% ( 95) 00:09:31.333 17845.957 - 17946.782: 71.0609% ( 99) 00:09:31.333 17946.782 - 18047.606: 72.0457% ( 75) 00:09:31.333 18047.606 - 18148.431: 73.0305% ( 75) 00:09:31.333 18148.431 - 18249.255: 74.4223% ( 106) 00:09:31.333 18249.255 - 18350.080: 75.4070% ( 75) 00:09:31.333 18350.080 - 18450.905: 76.6938% ( 98) 00:09:31.333 18450.905 - 18551.729: 77.7180% ( 78) 00:09:31.333 18551.729 - 18652.554: 78.8603% ( 87) 00:09:31.333 18652.554 - 18753.378: 79.8976% ( 79) 00:09:31.333 18753.378 - 18854.203: 80.9217% ( 78) 00:09:31.333 18854.203 - 18955.028: 81.9853% ( 81) 00:09:31.333 18955.028 - 19055.852: 83.0095% ( 78) 00:09:31.333 19055.852 - 19156.677: 83.9023% ( 68) 00:09:31.333 19156.677 - 19257.502: 84.8871% ( 75) 00:09:31.333 19257.502 - 19358.326: 86.0294% ( 87) 00:09:31.333 19358.326 - 19459.151: 87.0798% ( 80) 00:09:31.333 19459.151 - 19559.975: 87.9596% ( 67) 00:09:31.333 19559.975 - 19660.800: 88.7211% ( 58) 00:09:31.333 19660.800 - 19761.625: 89.6534% ( 71) 00:09:31.333 19761.625 - 19862.449: 90.4412% ( 60) 00:09:31.333 19862.449 - 19963.274: 91.0583% ( 47) 00:09:31.333 19963.274 - 20064.098: 91.8986% ( 64) 00:09:31.333 20064.098 - 20164.923: 92.4895% ( 45) 00:09:31.333 20164.923 - 20265.748: 92.9622% ( 36) 00:09:31.333 20265.748 - 20366.572: 93.6318% ( 51) 00:09:31.333 20366.572 - 20467.397: 94.0126% ( 29) 00:09:31.333 20467.397 - 20568.222: 94.3934% ( 29) 00:09:31.333 20568.222 - 20669.046: 94.6297% ( 18) 00:09:31.333 20669.046 - 20769.871: 94.9974% ( 28) 00:09:31.333 20769.871 - 20870.695: 95.4438% ( 34) 00:09:31.333 20870.695 - 20971.520: 95.7589% ( 24) 00:09:31.333 20971.520 - 21072.345: 96.1397% ( 29) 00:09:31.333 21072.345 - 21173.169: 96.4286% ( 22) 00:09:31.333 21173.169 - 21273.994: 96.7700% ( 26) 00:09:31.333 21273.994 - 21374.818: 97.0194% ( 19) 00:09:31.333 21374.818 - 21475.643: 97.2295% ( 16) 00:09:31.333 21475.643 - 21576.468: 97.3871% ( 12) 00:09:31.333 21576.468 - 21677.292: 97.6366% ( 19) 
00:09:31.333 21677.292 - 21778.117: 97.7810% ( 11) 00:09:31.333 21778.117 - 21878.942: 97.9911% ( 16) 00:09:31.333 21878.942 - 21979.766: 98.0961% ( 8) 00:09:31.333 21979.766 - 22080.591: 98.1618% ( 5) 00:09:31.333 22080.591 - 22181.415: 98.2012% ( 3) 00:09:31.333 22181.415 - 22282.240: 98.2537% ( 4) 00:09:31.334 22282.240 - 22383.065: 98.3062% ( 4) 00:09:31.334 22383.065 - 22483.889: 98.3193% ( 1) 00:09:31.334 32667.175 - 32868.825: 98.3718% ( 4) 00:09:31.334 32868.825 - 33070.474: 98.5032% ( 10) 00:09:31.334 33070.474 - 33272.123: 98.5688% ( 5) 00:09:31.334 33272.123 - 33473.772: 98.6870% ( 9) 00:09:31.334 33473.772 - 33675.422: 98.7920% ( 8) 00:09:31.334 33675.422 - 33877.071: 98.8708% ( 6) 00:09:31.334 33877.071 - 34078.720: 98.9758% ( 8) 00:09:31.334 34078.720 - 34280.369: 99.0678% ( 7) 00:09:31.334 34280.369 - 34482.018: 99.1597% ( 7) 00:09:31.334 40733.145 - 40934.794: 99.1728% ( 1) 00:09:31.334 40934.794 - 41136.443: 99.2253% ( 4) 00:09:31.334 41136.443 - 41338.092: 99.2778% ( 4) 00:09:31.334 41338.092 - 41539.742: 99.3566% ( 6) 00:09:31.334 41539.742 - 41741.391: 99.4223% ( 5) 00:09:31.334 41741.391 - 41943.040: 99.4617% ( 3) 00:09:31.334 41943.040 - 42144.689: 99.5404% ( 6) 00:09:31.334 42144.689 - 42346.338: 99.6061% ( 5) 00:09:31.334 42346.338 - 42547.988: 99.6717% ( 5) 00:09:31.334 42547.988 - 42749.637: 99.7374% ( 5) 00:09:31.334 42749.637 - 42951.286: 99.8030% ( 5) 00:09:31.334 42951.286 - 43152.935: 99.8818% ( 6) 00:09:31.334 43152.935 - 43354.585: 99.9606% ( 6) 00:09:31.334 43354.585 - 43556.234: 100.0000% ( 3) 00:09:31.334 00:09:31.334 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:31.334 ============================================================================== 00:09:31.334 Range in us Cumulative IO count 00:09:31.334 9275.865 - 9326.277: 0.0263% ( 2) 00:09:31.334 9326.277 - 9376.689: 0.0657% ( 3) 00:09:31.334 9376.689 - 9427.102: 0.0919% ( 2) 00:09:31.334 9427.102 - 9477.514: 0.1313% ( 3) 00:09:31.334 9477.514 - 9527.926: 0.1707% ( 3) 00:09:31.334 9527.926 - 9578.338: 0.1970% ( 2) 00:09:31.334 9578.338 - 9628.751: 0.2232% ( 2) 00:09:31.334 9628.751 - 9679.163: 0.2626% ( 3) 00:09:31.334 9679.163 - 9729.575: 0.3020% ( 3) 00:09:31.334 9729.575 - 9779.988: 0.3283% ( 2) 00:09:31.334 9779.988 - 9830.400: 0.3676% ( 3) 00:09:31.334 9830.400 - 9880.812: 0.3939% ( 2) 00:09:31.334 9880.812 - 9931.225: 0.4333% ( 3) 00:09:31.334 9931.225 - 9981.637: 0.4596% ( 2) 00:09:31.334 9981.637 - 10032.049: 0.4989% ( 3) 00:09:31.334 10032.049 - 10082.462: 0.5252% ( 2) 00:09:31.334 10082.462 - 10132.874: 0.5646% ( 3) 00:09:31.334 10132.874 - 10183.286: 0.5909% ( 2) 00:09:31.334 10183.286 - 10233.698: 0.6303% ( 3) 00:09:31.334 10233.698 - 10284.111: 0.6696% ( 3) 00:09:31.334 10284.111 - 10334.523: 0.6959% ( 2) 00:09:31.334 10334.523 - 10384.935: 0.7353% ( 3) 00:09:31.334 10384.935 - 10435.348: 0.7747% ( 3) 00:09:31.334 10435.348 - 10485.760: 0.8009% ( 2) 00:09:31.334 10485.760 - 10536.172: 0.8272% ( 2) 00:09:31.334 10536.172 - 10586.585: 0.8403% ( 1) 00:09:31.334 11141.120 - 11191.532: 0.8535% ( 1) 00:09:31.334 11191.532 - 11241.945: 0.8797% ( 2) 00:09:31.334 11241.945 - 11292.357: 0.9322% ( 4) 00:09:31.334 11292.357 - 11342.769: 0.9585% ( 2) 00:09:31.334 11342.769 - 11393.182: 0.9848% ( 2) 00:09:31.334 11393.182 - 11443.594: 1.0110% ( 2) 00:09:31.334 11443.594 - 11494.006: 1.0767% ( 5) 00:09:31.334 11494.006 - 11544.418: 1.1555% ( 6) 00:09:31.334 11544.418 - 11594.831: 1.2211% ( 5) 00:09:31.334 11594.831 - 11645.243: 1.2605% ( 3) 00:09:31.334 11645.243 - 11695.655: 
1.2868% ( 2) 00:09:31.334 11695.655 - 11746.068: 1.3262% ( 3) 00:09:31.334 11746.068 - 11796.480: 1.3524% ( 2) 00:09:31.334 11796.480 - 11846.892: 1.3655% ( 1) 00:09:31.334 11846.892 - 11897.305: 1.4049% ( 3) 00:09:31.334 11897.305 - 11947.717: 1.5494% ( 11) 00:09:31.334 11947.717 - 11998.129: 1.6413% ( 7) 00:09:31.334 11998.129 - 12048.542: 1.7595% ( 9) 00:09:31.334 12048.542 - 12098.954: 1.8645% ( 8) 00:09:31.334 12098.954 - 12149.366: 2.1140% ( 19) 00:09:31.334 12149.366 - 12199.778: 2.2715% ( 12) 00:09:31.334 12199.778 - 12250.191: 2.4160% ( 11) 00:09:31.334 12250.191 - 12300.603: 2.5604% ( 11) 00:09:31.334 12300.603 - 12351.015: 2.7311% ( 13) 00:09:31.334 12351.015 - 12401.428: 2.8361% ( 8) 00:09:31.334 12401.428 - 12451.840: 3.0068% ( 13) 00:09:31.334 12451.840 - 12502.252: 3.2432% ( 18) 00:09:31.334 12502.252 - 12552.665: 3.4664% ( 17) 00:09:31.334 12552.665 - 12603.077: 3.7027% ( 18) 00:09:31.334 12603.077 - 12653.489: 4.0047% ( 23) 00:09:31.334 12653.489 - 12703.902: 4.3330% ( 25) 00:09:31.334 12703.902 - 12754.314: 4.6350% ( 23) 00:09:31.334 12754.314 - 12804.726: 4.8713% ( 18) 00:09:31.334 12804.726 - 12855.138: 5.1733% ( 23) 00:09:31.334 12855.138 - 12905.551: 5.5410% ( 28) 00:09:31.334 12905.551 - 13006.375: 6.1187% ( 44) 00:09:31.334 13006.375 - 13107.200: 6.8671% ( 57) 00:09:31.334 13107.200 - 13208.025: 7.6287% ( 58) 00:09:31.334 13208.025 - 13308.849: 8.4821% ( 65) 00:09:31.334 13308.849 - 13409.674: 9.2174% ( 56) 00:09:31.334 13409.674 - 13510.498: 10.0840% ( 66) 00:09:31.334 13510.498 - 13611.323: 10.9375% ( 65) 00:09:31.334 13611.323 - 13712.148: 11.8697% ( 71) 00:09:31.334 13712.148 - 13812.972: 12.7363% ( 66) 00:09:31.334 13812.972 - 13913.797: 13.5242% ( 60) 00:09:31.334 13913.797 - 14014.622: 14.3382% ( 62) 00:09:31.334 14014.622 - 14115.446: 15.3493% ( 77) 00:09:31.334 14115.446 - 14216.271: 16.3603% ( 77) 00:09:31.334 14216.271 - 14317.095: 17.3976% ( 79) 00:09:31.334 14317.095 - 14417.920: 18.7894% ( 106) 00:09:31.334 14417.920 - 14518.745: 20.0630% ( 97) 00:09:31.334 14518.745 - 14619.569: 21.5205% ( 111) 00:09:31.334 14619.569 - 14720.394: 23.0567% ( 117) 00:09:31.334 14720.394 - 14821.218: 24.6849% ( 124) 00:09:31.334 14821.218 - 14922.043: 26.2474% ( 119) 00:09:31.334 14922.043 - 15022.868: 27.9018% ( 126) 00:09:31.334 15022.868 - 15123.692: 29.4905% ( 121) 00:09:31.334 15123.692 - 15224.517: 31.1187% ( 124) 00:09:31.334 15224.517 - 15325.342: 32.9569% ( 140) 00:09:31.334 15325.342 - 15426.166: 34.6507% ( 129) 00:09:31.334 15426.166 - 15526.991: 36.3839% ( 132) 00:09:31.334 15526.991 - 15627.815: 38.1171% ( 132) 00:09:31.334 15627.815 - 15728.640: 39.9947% ( 143) 00:09:31.334 15728.640 - 15829.465: 41.7148% ( 131) 00:09:31.334 15829.465 - 15930.289: 43.2773% ( 119) 00:09:31.334 15930.289 - 16031.114: 44.9186% ( 125) 00:09:31.334 16031.114 - 16131.938: 46.3892% ( 112) 00:09:31.334 16131.938 - 16232.763: 47.8204% ( 109) 00:09:31.334 16232.763 - 16333.588: 49.1728% ( 103) 00:09:31.334 16333.588 - 16434.412: 50.4333% ( 96) 00:09:31.334 16434.412 - 16535.237: 51.5494% ( 85) 00:09:31.334 16535.237 - 16636.062: 52.6261% ( 82) 00:09:31.334 16636.062 - 16736.886: 53.9916% ( 104) 00:09:31.334 16736.886 - 16837.711: 55.3309% ( 102) 00:09:31.334 16837.711 - 16938.535: 56.6702% ( 102) 00:09:31.334 16938.535 - 17039.360: 58.1145% ( 110) 00:09:31.334 17039.360 - 17140.185: 59.5982% ( 113) 00:09:31.334 17140.185 - 17241.009: 61.0951% ( 114) 00:09:31.334 17241.009 - 17341.834: 62.4212% ( 101) 00:09:31.334 17341.834 - 17442.658: 63.7474% ( 101) 00:09:31.334 17442.658 - 
17543.483: 65.0735% (  101)
00:09:31.334 [... latency histogram bucket lines elided: 17543.483 us to 42547.988 us, cumulative IO 65.0735% to 100.0000% ...]
00:09:31.335 
00:09:31.335 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:31.335 ==============================================================================
00:09:31.335        Range in us     Cumulative    IO count
00:09:31.335 [... bucket lines elided: 8469.268 us to 42346.338 us, cumulative IO 0.0263% to 100.0000% ...]
00:09:31.336 
00:09:31.336 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:31.336 ==============================================================================
00:09:31.336        Range in us     Cumulative    IO count
00:09:31.336 [... bucket lines elided: 8267.618 us to 41539.742 us, cumulative IO 0.0263% to 100.0000% ...]
00:09:31.337 
00:09:31.337 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:31.337 ==============================================================================
00:09:31.337        Range in us     Cumulative    IO count
00:09:31.337 [... bucket lines elided: 8217.206 us to 31457.280 us, cumulative IO 0.0651% to 100.0000% ...]
00:09:31.338 
00:09:31.338 19:27:58 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:09:32.724 Initializing NVMe Controllers
00:09:32.724 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:32.724 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:32.724 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:32.724 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:32.724 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:09:32.724 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:09:32.724 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:09:32.724 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:09:32.724 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:09:32.724 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:09:32.724 Initialization complete. Launching workers.
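The write-phase run launched above uses, assuming the usual spdk_nvme_perf option meanings, a queue depth of 128 (-q), 12288-byte I/Os (-o), a sequential write workload (-w), a one-second run (-t), and software latency tracking, where the doubled -LL is what requests the detailed per-bucket histograms that accompany the percentile summaries below. The sketch that follows is an editor's illustration, not SPDK code: it shows how a percentile line in a "Summary latency data" block can be read off a cumulative histogram by scanning for the first bucket whose cumulative share reaches the target percentile. The bucket pairs are a thinned-out sample transcribed from the PCIE (0000:00:12.0) NSID 2 read-phase histogram earlier in this log; the helper name is this sketch's own.

```python
# Hedged sketch: recovering percentile latencies from spdk_nvme_perf -LL output.
# BUCKETS holds (bucket upper bound in us, cumulative IO percent) pairs, thinned
# out from the PCIE (0000:00:12.0) NSID 2 read-phase histogram printed earlier.
BUCKETS = [
    (9124.628, 0.5646),
    (12855.138, 4.6350),
    (16434.412, 50.6565),
    (19963.274, 91.7017),
    (22080.591, 98.3193),
    (31053.982, 99.0678),
    (41539.742, 100.0000),
]

def percentile_us(buckets, pct):
    """Return the upper bound of the first bucket whose cumulative share
    reaches pct; this is the value a 'pct% : Nus' summary line reports."""
    for upper_us, cumulative in buckets:
        if cumulative >= pct:
            return upper_us
    raise ValueError("histogram does not reach the requested percentile")

for pct in (50.0, 90.0, 99.0):
    print(f"{pct:9.5f}% : {percentile_us(BUCKETS, pct)}us")
```

Run against a full bucket list, the same scan reproduces the summary tables printed below; against the thinned sample above it necessarily lands on coarser upper bounds.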
00:09:32.724 ========================================================
00:09:32.724                                                      Latency(us)
00:09:32.724 Device Information                     :     IOPS    MiB/s    Average      min       max
00:09:32.724 PCIE (0000:00:13.0) NSID 1 from core 0:  9303.93   109.03   13777.94  9238.55  38132.03
00:09:32.724 PCIE (0000:00:10.0) NSID 1 from core 0:  9303.93   109.03   13755.86  9084.01  36845.77
00:09:32.724 PCIE (0000:00:11.0) NSID 1 from core 0:  9303.93   109.03   13732.91  9221.53  34846.75
00:09:32.724 PCIE (0000:00:12.0) NSID 1 from core 0:  9303.93   109.03   13711.14  8564.19  34325.88
00:09:32.724 PCIE (0000:00:12.0) NSID 2 from core 0:  9303.93   109.03   13689.64  8577.19  32857.46
00:09:32.724 PCIE (0000:00:12.0) NSID 3 from core 0:  9303.93   109.03   13668.00  8685.47  30905.48
00:09:32.724 ========================================================
00:09:32.724 Total                                  : 55823.57   654.18   13722.58  8564.19  38132.03
00:09:32.724 
00:09:32.724 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:09:32.724 =================================================================================
00:09:32.724   1.00000% :  9628.751us
00:09:32.724  10.00000% : 10384.935us
00:09:32.724  25.00000% : 11645.243us
00:09:32.724  50.00000% : 12905.551us
00:09:32.724  75.00000% : 15627.815us
00:09:32.724  90.00000% : 17140.185us
00:09:32.724  95.00000% : 18955.028us
00:09:32.724  98.00000% : 21072.345us
00:09:32.724  99.00000% : 28230.892us
00:09:32.724  99.50000% : 36700.160us
00:09:32.724  99.90000% : 37910.055us
00:09:32.724  99.99000% : 38313.354us
00:09:32.724  99.99900% : 38313.354us
00:09:32.724  99.99990% : 38313.354us
00:09:32.724  99.99999% : 38313.354us
00:09:32.724 
00:09:32.724 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:09:32.725 =================================================================================
00:09:32.725   1.00000% :  9477.514us
00:09:32.725  10.00000% : 10435.348us
00:09:32.725  25.00000% : 11645.243us
00:09:32.725  50.00000% : 13006.375us
00:09:32.725  75.00000% : 15526.991us
00:09:32.725  90.00000% : 17341.834us
00:09:32.725  95.00000% : 18955.028us
00:09:32.725  98.00000% : 20769.871us
00:09:32.725  99.00000% : 28029.243us
00:09:32.725  99.50000% : 35490.265us
00:09:32.725  99.90000% : 36700.160us
00:09:32.725  99.99000% : 36901.809us
00:09:32.725  99.99900% : 36901.809us
00:09:32.725  99.99990% : 36901.809us
00:09:32.725  99.99999% : 36901.809us
00:09:32.725 
00:09:32.725 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:09:32.725 =================================================================================
00:09:32.725   1.00000% :  9679.163us
00:09:32.725  10.00000% : 10384.935us
00:09:32.725  25.00000% : 11695.655us
00:09:32.725  50.00000% : 12804.726us
00:09:32.725  75.00000% : 15426.166us
00:09:32.725  90.00000% : 17442.658us
00:09:32.725  95.00000% : 19055.852us
00:09:32.725  98.00000% : 20870.695us
00:09:32.725  99.00000% : 26617.698us
00:09:32.725  99.50000% : 33877.071us
00:09:32.725  99.90000% : 34683.668us
00:09:32.725  99.99000% : 34885.317us
00:09:32.725  99.99900% : 34885.317us
00:09:32.725  99.99990% : 34885.317us
00:09:32.725  99.99999% : 34885.317us
00:09:32.725 
00:09:32.725 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:32.725 =================================================================================
00:09:32.725   1.00000% :  9527.926us
00:09:32.725  10.00000% : 10435.348us
00:09:32.725  25.00000% : 11594.831us
00:09:32.725  50.00000% : 12855.138us
00:09:32.725  75.00000% : 15325.342us
00:09:32.725  90.00000% : 17442.658us
00:09:32.725  95.00000% : 19358.326us
00:09:32.725  98.00000% : 21677.292us
00:09:32.725  99.00000% : 26214.400us
00:09:32.725  99.50000% : 33272.123us
00:09:32.725  99.90000% : 34280.369us
00:09:32.725  99.99000% : 34482.018us
00:09:32.725  99.99900% : 34482.018us
00:09:32.725  99.99990% : 34482.018us
00:09:32.725  99.99999% : 34482.018us
00:09:32.725 
00:09:32.725 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:32.725 =================================================================================
00:09:32.725   1.00000% :  9527.926us
00:09:32.725  10.00000% : 10384.935us
00:09:32.725  25.00000% : 11544.418us
00:09:32.725  50.00000% : 12804.726us
00:09:32.725  75.00000% : 15426.166us
00:09:32.725  90.00000% : 17341.834us
00:09:32.725  95.00000% : 19156.677us
00:09:32.725  98.00000% : 21778.117us
00:09:32.725  99.00000% : 24903.680us
00:09:32.725  99.50000% : 31658.929us
00:09:32.725  99.90000% : 32667.175us
00:09:32.725  99.99000% : 32868.825us
00:09:32.725  99.99900% : 32868.825us
00:09:32.725  99.99990% : 32868.825us
00:09:32.725  99.99999% : 32868.825us
00:09:32.725 
00:09:32.725 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:32.725 =================================================================================
00:09:32.725   1.00000% :  9527.926us
00:09:32.725  10.00000% : 10334.523us
00:09:32.725  25.00000% : 11544.418us
00:09:32.725  50.00000% : 12855.138us
00:09:32.725  75.00000% : 15426.166us
00:09:32.725  90.00000% : 17241.009us
00:09:32.725  95.00000% : 18854.203us
00:09:32.725  98.00000% : 21374.818us
00:09:32.725  99.00000% : 23895.434us
00:09:32.725  99.50000% : 29037.489us
00:09:32.725  99.90000% : 30852.332us
00:09:32.725  99.99000% : 31053.982us
00:09:32.725  99.99900% : 31053.982us
00:09:32.725  99.99990% : 31053.982us
00:09:32.725  99.99999% : 31053.982us
00:09:32.725 
00:09:32.725 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:09:32.725 ==============================================================================
00:09:32.725        Range in us     Cumulative    IO count
00:09:32.725 [... bucket lines elided: 9225.452 us to 38313.354 us, cumulative IO 0.0321% to 100.0000% ...]
00:09:32.726 
00:09:32.726 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:09:32.726 ==============================================================================
00:09:32.726        Range in us     Cumulative    IO count
00:09:32.726 [... bucket lines elided: 9074.215 us to 36901.809 us, cumulative IO 0.0428% to 100.0000% ...]
00:09:32.727 
00:09:32.727 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:09:32.727 ==============================================================================
00:09:32.727        Range in us     Cumulative    IO count
00:09:32.727 [... bucket lines elided: 9175.040 us to 34885.317 us, cumulative IO 0.0214% to 100.0000% ...]
00:09:32.728 
00:09:32.728 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:32.728 ==============================================================================
00:09:32.728        Range in us     Cumulative    IO count
00:09:32.728 [... bucket lines elided: 8519.680 us to 13712.148 us, cumulative IO 0.0107% to 56.9349% ...]
00:09:32.728 13712.148 - 13812.972: 57.3630%
( 40) 00:09:32.728 13812.972 - 13913.797: 57.9195% ( 52) 00:09:32.728 13913.797 - 14014.622: 58.8506% ( 87) 00:09:32.728 14014.622 - 14115.446: 59.6961% ( 79) 00:09:32.728 14115.446 - 14216.271: 60.5308% ( 78) 00:09:32.728 14216.271 - 14317.095: 61.4298% ( 84) 00:09:32.728 14317.095 - 14417.920: 62.5535% ( 105) 00:09:32.728 14417.920 - 14518.745: 64.0732% ( 142) 00:09:32.728 14518.745 - 14619.569: 65.7320% ( 155) 00:09:32.728 14619.569 - 14720.394: 67.1019% ( 128) 00:09:32.728 14720.394 - 14821.218: 68.4932% ( 130) 00:09:32.728 14821.218 - 14922.043: 70.1734% ( 157) 00:09:32.728 14922.043 - 15022.868: 71.8322% ( 155) 00:09:32.728 15022.868 - 15123.692: 73.3519% ( 142) 00:09:32.728 15123.692 - 15224.517: 74.7003% ( 126) 00:09:32.728 15224.517 - 15325.342: 75.8562% ( 108) 00:09:32.728 15325.342 - 15426.166: 77.1190% ( 118) 00:09:32.728 15426.166 - 15526.991: 78.3390% ( 114) 00:09:32.728 15526.991 - 15627.815: 79.5912% ( 117) 00:09:32.729 15627.815 - 15728.640: 80.3296% ( 69) 00:09:32.729 15728.640 - 15829.465: 81.1430% ( 76) 00:09:32.729 15829.465 - 15930.289: 82.2881% ( 107) 00:09:32.729 15930.289 - 16031.114: 83.0801% ( 74) 00:09:32.729 16031.114 - 16131.938: 83.6580% ( 54) 00:09:32.729 16131.938 - 16232.763: 84.2038% ( 51) 00:09:32.729 16232.763 - 16333.588: 84.6211% ( 39) 00:09:32.729 16333.588 - 16434.412: 85.2419% ( 58) 00:09:32.729 16434.412 - 16535.237: 85.5950% ( 33) 00:09:32.729 16535.237 - 16636.062: 85.9803% ( 36) 00:09:32.729 16636.062 - 16736.886: 86.3870% ( 38) 00:09:32.729 16736.886 - 16837.711: 86.8365% ( 42) 00:09:32.729 16837.711 - 16938.535: 87.3288% ( 46) 00:09:32.729 16938.535 - 17039.360: 87.9388% ( 57) 00:09:32.729 17039.360 - 17140.185: 88.4311% ( 46) 00:09:32.729 17140.185 - 17241.009: 88.9769% ( 51) 00:09:32.729 17241.009 - 17341.834: 89.6832% ( 66) 00:09:32.729 17341.834 - 17442.658: 90.1327% ( 42) 00:09:32.729 17442.658 - 17543.483: 90.4752% ( 32) 00:09:32.729 17543.483 - 17644.308: 90.9461% ( 44) 00:09:32.729 17644.308 - 17745.132: 91.5775% ( 59) 00:09:32.729 17745.132 - 17845.957: 91.9414% ( 34) 00:09:32.729 17845.957 - 17946.782: 92.2196% ( 26) 00:09:32.729 17946.782 - 18047.606: 92.4872% ( 25) 00:09:32.729 18047.606 - 18148.431: 92.6905% ( 19) 00:09:32.729 18148.431 - 18249.255: 92.8510% ( 15) 00:09:32.729 18249.255 - 18350.080: 92.9902% ( 13) 00:09:32.729 18350.080 - 18450.905: 93.0972% ( 10) 00:09:32.729 18450.905 - 18551.729: 93.2256% ( 12) 00:09:32.729 18551.729 - 18652.554: 93.3861% ( 15) 00:09:32.729 18652.554 - 18753.378: 93.5146% ( 12) 00:09:32.729 18753.378 - 18854.203: 93.7179% ( 19) 00:09:32.729 18854.203 - 18955.028: 93.9747% ( 24) 00:09:32.729 18955.028 - 19055.852: 94.2209% ( 23) 00:09:32.729 19055.852 - 19156.677: 94.4563% ( 22) 00:09:32.729 19156.677 - 19257.502: 94.7239% ( 25) 00:09:32.729 19257.502 - 19358.326: 95.0985% ( 35) 00:09:32.729 19358.326 - 19459.151: 95.4944% ( 37) 00:09:32.729 19459.151 - 19559.975: 95.9225% ( 40) 00:09:32.729 19559.975 - 19660.800: 96.2650% ( 32) 00:09:32.729 19660.800 - 19761.625: 96.5325% ( 25) 00:09:32.729 19761.625 - 19862.449: 96.7787% ( 23) 00:09:32.729 19862.449 - 19963.274: 96.9606% ( 17) 00:09:32.729 19963.274 - 20064.098: 97.1640% ( 19) 00:09:32.729 20064.098 - 20164.923: 97.3887% ( 21) 00:09:32.729 20164.923 - 20265.748: 97.6455% ( 24) 00:09:32.729 20265.748 - 20366.572: 97.7847% ( 13) 00:09:32.729 20366.572 - 20467.397: 97.8596% ( 7) 00:09:32.729 20467.397 - 20568.222: 97.8810% ( 2) 00:09:32.729 20568.222 - 20669.046: 97.9131% ( 3) 00:09:32.729 20669.046 - 20769.871: 97.9452% ( 3) 00:09:32.729 
21475.643 - 21576.468: 97.9666% ( 2) 00:09:32.729 21576.468 - 21677.292: 98.0522% ( 8) 00:09:32.729 21677.292 - 21778.117: 98.1164% ( 6) 00:09:32.729 21778.117 - 21878.942: 98.1485% ( 3) 00:09:32.729 21878.942 - 21979.766: 98.1807% ( 3) 00:09:32.729 21979.766 - 22080.591: 98.2235% ( 4) 00:09:32.729 22080.591 - 22181.415: 98.2556% ( 3) 00:09:32.729 22181.415 - 22282.240: 98.2984% ( 4) 00:09:32.729 22282.240 - 22383.065: 98.3305% ( 3) 00:09:32.729 22383.065 - 22483.889: 98.3733% ( 4) 00:09:32.729 22483.889 - 22584.714: 98.3947% ( 2) 00:09:32.729 22685.538 - 22786.363: 98.4803% ( 8) 00:09:32.729 22786.363 - 22887.188: 98.5445% ( 6) 00:09:32.729 22887.188 - 22988.012: 98.6087% ( 6) 00:09:32.729 22988.012 - 23088.837: 98.6301% ( 2) 00:09:32.729 24903.680 - 25004.505: 98.6408% ( 1) 00:09:32.729 25004.505 - 25105.329: 98.6729% ( 3) 00:09:32.729 25105.329 - 25206.154: 98.7051% ( 3) 00:09:32.729 25206.154 - 25306.978: 98.7265% ( 2) 00:09:32.729 25306.978 - 25407.803: 98.7586% ( 3) 00:09:32.729 25407.803 - 25508.628: 98.7907% ( 3) 00:09:32.729 25508.628 - 25609.452: 98.8228% ( 3) 00:09:32.729 25609.452 - 25710.277: 98.8549% ( 3) 00:09:32.729 25710.277 - 25811.102: 98.8870% ( 3) 00:09:32.729 25811.102 - 26012.751: 98.9512% ( 6) 00:09:32.729 26012.751 - 26214.400: 99.0154% ( 6) 00:09:32.729 26214.400 - 26416.049: 99.0796% ( 6) 00:09:32.729 26416.049 - 26617.698: 99.1438% ( 6) 00:09:32.729 26617.698 - 26819.348: 99.2080% ( 6) 00:09:32.729 26819.348 - 27020.997: 99.2830% ( 7) 00:09:32.729 27020.997 - 27222.646: 99.3151% ( 3) 00:09:32.729 32465.526 - 32667.175: 99.3258% ( 1) 00:09:32.729 32667.175 - 32868.825: 99.4007% ( 7) 00:09:32.729 32868.825 - 33070.474: 99.4863% ( 8) 00:09:32.729 33070.474 - 33272.123: 99.5612% ( 7) 00:09:32.729 33272.123 - 33473.772: 99.6468% ( 8) 00:09:32.729 33473.772 - 33675.422: 99.7324% ( 8) 00:09:32.729 33675.422 - 33877.071: 99.8074% ( 7) 00:09:32.729 33877.071 - 34078.720: 99.8930% ( 8) 00:09:32.729 34078.720 - 34280.369: 99.9786% ( 8) 00:09:32.729 34280.369 - 34482.018: 100.0000% ( 2) 00:09:32.729 00:09:32.729 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:32.729 ============================================================================== 00:09:32.729 Range in us Cumulative IO count 00:09:32.729 8570.092 - 8620.505: 0.0214% ( 2) 00:09:32.729 8620.505 - 8670.917: 0.0535% ( 3) 00:09:32.729 8670.917 - 8721.329: 0.0856% ( 3) 00:09:32.729 8721.329 - 8771.742: 0.1284% ( 4) 00:09:32.729 8771.742 - 8822.154: 0.1926% ( 6) 00:09:32.729 8822.154 - 8872.566: 0.2461% ( 5) 00:09:32.729 8872.566 - 8922.978: 0.3425% ( 9) 00:09:32.729 8922.978 - 8973.391: 0.4388% ( 9) 00:09:32.729 8973.391 - 9023.803: 0.4923% ( 5) 00:09:32.729 9023.803 - 9074.215: 0.5458% ( 5) 00:09:32.729 9074.215 - 9124.628: 0.5993% ( 5) 00:09:32.729 9124.628 - 9175.040: 0.6528% ( 5) 00:09:32.729 9175.040 - 9225.452: 0.6849% ( 3) 00:09:32.729 9275.865 - 9326.277: 0.7277% ( 4) 00:09:32.729 9326.277 - 9376.689: 0.7598% ( 3) 00:09:32.729 9376.689 - 9427.102: 0.7920% ( 3) 00:09:32.729 9427.102 - 9477.514: 0.8776% ( 8) 00:09:32.729 9477.514 - 9527.926: 1.0167% ( 13) 00:09:32.729 9527.926 - 9578.338: 1.1986% ( 17) 00:09:32.729 9578.338 - 9628.751: 1.5732% ( 35) 00:09:32.729 9628.751 - 9679.163: 1.8622% ( 27) 00:09:32.729 9679.163 - 9729.575: 2.2153% ( 33) 00:09:32.729 9729.575 - 9779.988: 2.6541% ( 41) 00:09:32.729 9779.988 - 9830.400: 3.2855% ( 59) 00:09:32.729 9830.400 - 9880.812: 4.0133% ( 68) 00:09:32.729 9880.812 - 9931.225: 4.7196% ( 66) 00:09:32.729 9931.225 - 9981.637: 5.2975% ( 54) 00:09:32.729 
9981.637 - 10032.049: 5.9503% ( 61) 00:09:32.729 10032.049 - 10082.462: 6.6139% ( 62) 00:09:32.729 10082.462 - 10132.874: 7.1597% ( 51) 00:09:32.729 10132.874 - 10183.286: 7.9409% ( 73) 00:09:32.729 10183.286 - 10233.698: 8.4867% ( 51) 00:09:32.729 10233.698 - 10284.111: 9.1074% ( 58) 00:09:32.729 10284.111 - 10334.523: 9.6211% ( 48) 00:09:32.729 10334.523 - 10384.935: 10.0278% ( 38) 00:09:32.729 10384.935 - 10435.348: 10.4987% ( 44) 00:09:32.729 10435.348 - 10485.760: 11.1622% ( 62) 00:09:32.729 10485.760 - 10536.172: 11.7616% ( 56) 00:09:32.729 10536.172 - 10586.585: 12.2324% ( 44) 00:09:32.729 10586.585 - 10636.997: 12.9281% ( 65) 00:09:32.729 10636.997 - 10687.409: 13.4739% ( 51) 00:09:32.729 10687.409 - 10737.822: 13.9662% ( 46) 00:09:32.729 10737.822 - 10788.234: 14.5548% ( 55) 00:09:32.729 10788.234 - 10838.646: 15.1220% ( 53) 00:09:32.729 10838.646 - 10889.058: 15.5929% ( 44) 00:09:32.729 10889.058 - 10939.471: 16.1708% ( 54) 00:09:32.729 10939.471 - 10989.883: 16.6310% ( 43) 00:09:32.729 10989.883 - 11040.295: 17.0591% ( 40) 00:09:32.729 11040.295 - 11090.708: 17.4658% ( 38) 00:09:32.729 11090.708 - 11141.120: 17.8831% ( 39) 00:09:32.729 11141.120 - 11191.532: 18.4610% ( 54) 00:09:32.729 11191.532 - 11241.945: 19.2209% ( 71) 00:09:32.729 11241.945 - 11292.357: 20.1948% ( 91) 00:09:32.729 11292.357 - 11342.769: 21.3506% ( 108) 00:09:32.729 11342.769 - 11393.182: 22.2710% ( 86) 00:09:32.729 11393.182 - 11443.594: 23.1485% ( 82) 00:09:32.729 11443.594 - 11494.006: 23.9940% ( 79) 00:09:32.729 11494.006 - 11544.418: 25.0642% ( 100) 00:09:32.729 11544.418 - 11594.831: 26.1023% ( 97) 00:09:32.729 11594.831 - 11645.243: 27.2153% ( 104) 00:09:32.730 11645.243 - 11695.655: 28.5424% ( 124) 00:09:32.730 11695.655 - 11746.068: 29.5591% ( 95) 00:09:32.730 11746.068 - 11796.480: 30.4688% ( 85) 00:09:32.730 11796.480 - 11846.892: 32.0741% ( 150) 00:09:32.730 11846.892 - 11897.305: 33.2192% ( 107) 00:09:32.730 11897.305 - 11947.717: 34.1610% ( 88) 00:09:32.730 11947.717 - 11998.129: 35.1562% ( 93) 00:09:32.730 11998.129 - 12048.542: 36.3763% ( 114) 00:09:32.730 12048.542 - 12098.954: 37.5107% ( 106) 00:09:32.730 12098.954 - 12149.366: 38.4418% ( 87) 00:09:32.730 12149.366 - 12199.778: 39.5869% ( 107) 00:09:32.730 12199.778 - 12250.191: 40.8283% ( 116) 00:09:32.730 12250.191 - 12300.603: 41.9842% ( 108) 00:09:32.730 12300.603 - 12351.015: 42.8938% ( 85) 00:09:32.730 12351.015 - 12401.428: 44.0925% ( 112) 00:09:32.730 12401.428 - 12451.840: 45.1092% ( 95) 00:09:32.730 12451.840 - 12502.252: 45.9653% ( 80) 00:09:32.730 12502.252 - 12552.665: 46.6717% ( 66) 00:09:32.730 12552.665 - 12603.077: 47.5278% ( 80) 00:09:32.730 12603.077 - 12653.489: 48.2556% ( 68) 00:09:32.730 12653.489 - 12703.902: 48.9726% ( 67) 00:09:32.730 12703.902 - 12754.314: 49.6147% ( 60) 00:09:32.730 12754.314 - 12804.726: 50.2140% ( 56) 00:09:32.730 12804.726 - 12855.138: 50.7705% ( 52) 00:09:32.730 12855.138 - 12905.551: 51.1986% ( 40) 00:09:32.730 12905.551 - 13006.375: 52.0013% ( 75) 00:09:32.730 13006.375 - 13107.200: 52.9431% ( 88) 00:09:32.730 13107.200 - 13208.025: 53.9598% ( 95) 00:09:32.730 13208.025 - 13308.849: 54.5912% ( 59) 00:09:32.730 13308.849 - 13409.674: 55.0086% ( 39) 00:09:32.730 13409.674 - 13510.498: 55.3617% ( 33) 00:09:32.730 13510.498 - 13611.323: 55.8540% ( 46) 00:09:32.730 13611.323 - 13712.148: 56.5711% ( 67) 00:09:32.730 13712.148 - 13812.972: 57.4058% ( 78) 00:09:32.730 13812.972 - 13913.797: 58.1657% ( 71) 00:09:32.730 13913.797 - 14014.622: 59.1182% ( 89) 00:09:32.730 14014.622 - 14115.446: 60.3382% 
( 114) 00:09:32.730 14115.446 - 14216.271: 61.3228% ( 92) 00:09:32.730 14216.271 - 14317.095: 62.4251% ( 103) 00:09:32.730 14317.095 - 14417.920: 63.4846% ( 99) 00:09:32.730 14417.920 - 14518.745: 65.2076% ( 161) 00:09:32.730 14518.745 - 14619.569: 66.2671% ( 99) 00:09:32.730 14619.569 - 14720.394: 67.0270% ( 71) 00:09:32.730 14720.394 - 14821.218: 67.9902% ( 90) 00:09:32.730 14821.218 - 14922.043: 68.9854% ( 93) 00:09:32.730 14922.043 - 15022.868: 70.0878% ( 103) 00:09:32.730 15022.868 - 15123.692: 70.9225% ( 78) 00:09:32.730 15123.692 - 15224.517: 72.0676% ( 107) 00:09:32.730 15224.517 - 15325.342: 73.7693% ( 159) 00:09:32.730 15325.342 - 15426.166: 75.2676% ( 140) 00:09:32.730 15426.166 - 15526.991: 76.6481% ( 129) 00:09:32.730 15526.991 - 15627.815: 78.1250% ( 138) 00:09:32.730 15627.815 - 15728.640: 79.2059% ( 101) 00:09:32.730 15728.640 - 15829.465: 80.0086% ( 75) 00:09:32.730 15829.465 - 15930.289: 80.6721% ( 62) 00:09:32.730 15930.289 - 16031.114: 81.2286% ( 52) 00:09:32.730 16031.114 - 16131.938: 82.0312% ( 75) 00:09:32.730 16131.938 - 16232.763: 83.0479% ( 95) 00:09:32.730 16232.763 - 16333.588: 84.1182% ( 100) 00:09:32.730 16333.588 - 16434.412: 85.6485% ( 143) 00:09:32.730 16434.412 - 16535.237: 86.6224% ( 91) 00:09:32.730 16535.237 - 16636.062: 87.1468% ( 49) 00:09:32.730 16636.062 - 16736.886: 87.6391% ( 46) 00:09:32.730 16736.886 - 16837.711: 88.1849% ( 51) 00:09:32.730 16837.711 - 16938.535: 88.6451% ( 43) 00:09:32.730 16938.535 - 17039.360: 88.9983% ( 33) 00:09:32.730 17039.360 - 17140.185: 89.2872% ( 27) 00:09:32.730 17140.185 - 17241.009: 89.6725% ( 36) 00:09:32.730 17241.009 - 17341.834: 90.0792% ( 38) 00:09:32.730 17341.834 - 17442.658: 90.4431% ( 34) 00:09:32.730 17442.658 - 17543.483: 90.8711% ( 40) 00:09:32.730 17543.483 - 17644.308: 91.1601% ( 27) 00:09:32.730 17644.308 - 17745.132: 91.4598% ( 28) 00:09:32.730 17745.132 - 17845.957: 91.8664% ( 38) 00:09:32.730 17845.957 - 17946.782: 92.2838% ( 39) 00:09:32.730 17946.782 - 18047.606: 92.6584% ( 35) 00:09:32.730 18047.606 - 18148.431: 92.8403% ( 17) 00:09:32.730 18148.431 - 18249.255: 92.9902% ( 14) 00:09:32.730 18249.255 - 18350.080: 93.1079% ( 11) 00:09:32.730 18350.080 - 18450.905: 93.2149% ( 10) 00:09:32.730 18450.905 - 18551.729: 93.3968% ( 17) 00:09:32.730 18551.729 - 18652.554: 93.6430% ( 23) 00:09:32.730 18652.554 - 18753.378: 93.9533% ( 29) 00:09:32.730 18753.378 - 18854.203: 94.2316% ( 26) 00:09:32.730 18854.203 - 18955.028: 94.4563% ( 21) 00:09:32.730 18955.028 - 19055.852: 94.7667% ( 29) 00:09:32.730 19055.852 - 19156.677: 95.0771% ( 29) 00:09:32.730 19156.677 - 19257.502: 95.4195% ( 32) 00:09:32.730 19257.502 - 19358.326: 95.7513% ( 31) 00:09:32.730 19358.326 - 19459.151: 96.0938% ( 32) 00:09:32.730 19459.151 - 19559.975: 96.3720% ( 26) 00:09:32.730 19559.975 - 19660.800: 96.6289% ( 24) 00:09:32.730 19660.800 - 19761.625: 96.8322% ( 19) 00:09:32.730 19761.625 - 19862.449: 96.9927% ( 15) 00:09:32.730 19862.449 - 19963.274: 97.1104% ( 11) 00:09:32.730 19963.274 - 20064.098: 97.2068% ( 9) 00:09:32.730 20064.098 - 20164.923: 97.3138% ( 10) 00:09:32.730 20164.923 - 20265.748: 97.3887% ( 7) 00:09:32.730 20265.748 - 20366.572: 97.4101% ( 2) 00:09:32.730 20366.572 - 20467.397: 97.4422% ( 3) 00:09:32.730 20467.397 - 20568.222: 97.4529% ( 1) 00:09:32.730 20568.222 - 20669.046: 97.4957% ( 4) 00:09:32.730 20769.871 - 20870.695: 97.5171% ( 2) 00:09:32.730 20870.695 - 20971.520: 97.5492% ( 3) 00:09:32.730 20971.520 - 21072.345: 97.5706% ( 2) 00:09:32.730 21173.169 - 21273.994: 97.5920% ( 2) 00:09:32.730 21273.994 - 
21374.818: 97.6562% ( 6) 00:09:32.730 21374.818 - 21475.643: 97.7526% ( 9) 00:09:32.730 21475.643 - 21576.468: 97.8703% ( 11) 00:09:32.730 21576.468 - 21677.292: 97.9666% ( 9) 00:09:32.730 21677.292 - 21778.117: 98.0308% ( 6) 00:09:32.730 21778.117 - 21878.942: 98.1057% ( 7) 00:09:32.730 21878.942 - 21979.766: 98.1592% ( 5) 00:09:32.730 21979.766 - 22080.591: 98.2342% ( 7) 00:09:32.730 22080.591 - 22181.415: 98.2984% ( 6) 00:09:32.730 22181.415 - 22282.240: 98.3947% ( 9) 00:09:32.730 22282.240 - 22383.065: 98.4803% ( 8) 00:09:32.730 22383.065 - 22483.889: 98.5338% ( 5) 00:09:32.730 22483.889 - 22584.714: 98.5766% ( 4) 00:09:32.730 22584.714 - 22685.538: 98.6194% ( 4) 00:09:32.730 22685.538 - 22786.363: 98.6301% ( 1) 00:09:32.730 23895.434 - 23996.258: 98.6408% ( 1) 00:09:32.730 23996.258 - 24097.083: 98.6622% ( 2) 00:09:32.730 24097.083 - 24197.908: 98.7158% ( 5) 00:09:32.730 24197.908 - 24298.732: 98.7800% ( 6) 00:09:32.730 24298.732 - 24399.557: 98.8335% ( 5) 00:09:32.730 24399.557 - 24500.382: 98.8656% ( 3) 00:09:32.730 24500.382 - 24601.206: 98.9084% ( 4) 00:09:32.730 24601.206 - 24702.031: 98.9619% ( 5) 00:09:32.730 24702.031 - 24802.855: 98.9940% ( 3) 00:09:32.730 24802.855 - 24903.680: 99.0261% ( 3) 00:09:32.730 24903.680 - 25004.505: 99.0582% ( 3) 00:09:32.730 25004.505 - 25105.329: 99.0796% ( 2) 00:09:32.730 25105.329 - 25206.154: 99.1117% ( 3) 00:09:32.730 25206.154 - 25306.978: 99.1438% ( 3) 00:09:32.730 25306.978 - 25407.803: 99.1652% ( 2) 00:09:32.730 25407.803 - 25508.628: 99.1973% ( 3) 00:09:32.730 25508.628 - 25609.452: 99.2188% ( 2) 00:09:32.730 25609.452 - 25710.277: 99.2509% ( 3) 00:09:32.730 25710.277 - 25811.102: 99.2830% ( 3) 00:09:32.730 25811.102 - 26012.751: 99.3151% ( 3) 00:09:32.730 29844.086 - 30045.735: 99.3365% ( 2) 00:09:32.730 30045.735 - 30247.385: 99.3900% ( 5) 00:09:32.730 30247.385 - 30449.034: 99.4007% ( 1) 00:09:32.730 31053.982 - 31255.631: 99.4328% ( 3) 00:09:32.730 31255.631 - 31457.280: 99.4970% ( 6) 00:09:32.730 31457.280 - 31658.929: 99.5612% ( 6) 00:09:32.730 31658.929 - 31860.578: 99.6254% ( 6) 00:09:32.730 31860.578 - 32062.228: 99.6896% ( 6) 00:09:32.730 32062.228 - 32263.877: 99.7646% ( 7) 00:09:32.730 32263.877 - 32465.526: 99.8502% ( 8) 00:09:32.730 32465.526 - 32667.175: 99.9251% ( 7) 00:09:32.730 32667.175 - 32868.825: 100.0000% ( 7) 00:09:32.730 00:09:32.730 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:32.730 ============================================================================== 00:09:32.730 Range in us Cumulative IO count 00:09:32.730 8670.917 - 8721.329: 0.0214% ( 2) 00:09:32.730 8721.329 - 8771.742: 0.0535% ( 3) 00:09:32.730 8771.742 - 8822.154: 0.0963% ( 4) 00:09:32.730 8822.154 - 8872.566: 0.1177% ( 2) 00:09:32.730 8872.566 - 8922.978: 0.1498% ( 3) 00:09:32.730 8922.978 - 8973.391: 0.2140% ( 6) 00:09:32.730 8973.391 - 9023.803: 0.3639% ( 14) 00:09:32.730 9023.803 - 9074.215: 0.5244% ( 15) 00:09:32.730 9074.215 - 9124.628: 0.6421% ( 11) 00:09:32.730 9124.628 - 9175.040: 0.6849% ( 4) 00:09:32.730 9175.040 - 9225.452: 0.6956% ( 1) 00:09:32.730 9225.452 - 9275.865: 0.7063% ( 1) 00:09:32.730 9275.865 - 9326.277: 0.7384% ( 3) 00:09:32.730 9326.277 - 9376.689: 0.7598% ( 2) 00:09:32.730 9376.689 - 9427.102: 0.8134% ( 5) 00:09:32.730 9427.102 - 9477.514: 0.9311% ( 11) 00:09:32.730 9477.514 - 9527.926: 1.0916% ( 15) 00:09:32.730 9527.926 - 9578.338: 1.2949% ( 19) 00:09:32.730 9578.338 - 9628.751: 1.6481% ( 33) 00:09:32.730 9628.751 - 9679.163: 1.9157% ( 25) 00:09:32.730 9679.163 - 9729.575: 2.4829% ( 53) 
00:09:32.730 9729.575 - 9779.988: 2.8789% ( 37) 00:09:32.730 9779.988 - 9830.400: 3.4033% ( 49) 00:09:32.730 9830.400 - 9880.812: 3.9062% ( 47) 00:09:32.730 9880.812 - 9931.225: 4.5912% ( 64) 00:09:32.730 9931.225 - 9981.637: 5.3617% ( 72) 00:09:32.730 9981.637 - 10032.049: 6.1858% ( 77) 00:09:32.730 10032.049 - 10082.462: 6.8814% ( 65) 00:09:32.730 10082.462 - 10132.874: 7.6520% ( 72) 00:09:32.730 10132.874 - 10183.286: 8.3155% ( 62) 00:09:32.730 10183.286 - 10233.698: 8.9362% ( 58) 00:09:32.730 10233.698 - 10284.111: 9.5248% ( 55) 00:09:32.730 10284.111 - 10334.523: 10.0920% ( 53) 00:09:32.731 10334.523 - 10384.935: 10.7021% ( 57) 00:09:32.731 10384.935 - 10435.348: 11.3656% ( 62) 00:09:32.731 10435.348 - 10485.760: 12.0077% ( 60) 00:09:32.731 10485.760 - 10536.172: 12.3716% ( 34) 00:09:32.731 10536.172 - 10586.585: 12.7997% ( 40) 00:09:32.731 10586.585 - 10636.997: 13.2170% ( 39) 00:09:32.731 10636.997 - 10687.409: 13.6344% ( 39) 00:09:32.731 10687.409 - 10737.822: 14.0732% ( 41) 00:09:32.731 10737.822 - 10788.234: 14.4157% ( 32) 00:09:32.731 10788.234 - 10838.646: 14.7046% ( 27) 00:09:32.731 10838.646 - 10889.058: 14.9401% ( 22) 00:09:32.731 10889.058 - 10939.471: 15.3682% ( 40) 00:09:32.731 10939.471 - 10989.883: 15.6571% ( 27) 00:09:32.731 10989.883 - 11040.295: 16.0210% ( 34) 00:09:32.731 11040.295 - 11090.708: 16.6096% ( 55) 00:09:32.731 11090.708 - 11141.120: 17.2945% ( 64) 00:09:32.731 11141.120 - 11191.532: 18.0223% ( 68) 00:09:32.731 11191.532 - 11241.945: 18.9319% ( 85) 00:09:32.731 11241.945 - 11292.357: 19.8844% ( 89) 00:09:32.731 11292.357 - 11342.769: 20.9011% ( 95) 00:09:32.731 11342.769 - 11393.182: 22.0355% ( 106) 00:09:32.731 11393.182 - 11443.594: 22.8382% ( 75) 00:09:32.731 11443.594 - 11494.006: 23.9726% ( 106) 00:09:32.731 11494.006 - 11544.418: 25.1177% ( 107) 00:09:32.731 11544.418 - 11594.831: 26.1344% ( 95) 00:09:32.731 11594.831 - 11645.243: 27.1404% ( 94) 00:09:32.731 11645.243 - 11695.655: 28.0929% ( 89) 00:09:32.731 11695.655 - 11746.068: 29.3022% ( 113) 00:09:32.731 11746.068 - 11796.480: 30.5865% ( 120) 00:09:32.731 11796.480 - 11846.892: 31.4640% ( 82) 00:09:32.731 11846.892 - 11897.305: 32.5235% ( 99) 00:09:32.731 11897.305 - 11947.717: 33.3690% ( 79) 00:09:32.731 11947.717 - 11998.129: 34.3536% ( 92) 00:09:32.731 11998.129 - 12048.542: 35.2633% ( 85) 00:09:32.731 12048.542 - 12098.954: 36.6224% ( 127) 00:09:32.731 12098.954 - 12149.366: 37.9067% ( 120) 00:09:32.731 12149.366 - 12199.778: 39.0518% ( 107) 00:09:32.731 12199.778 - 12250.191: 40.1648% ( 104) 00:09:32.731 12250.191 - 12300.603: 41.4705% ( 122) 00:09:32.731 12300.603 - 12351.015: 42.6905% ( 114) 00:09:32.731 12351.015 - 12401.428: 43.6858% ( 93) 00:09:32.731 12401.428 - 12451.840: 44.6062% ( 86) 00:09:32.731 12451.840 - 12502.252: 45.4837% ( 82) 00:09:32.731 12502.252 - 12552.665: 46.3078% ( 77) 00:09:32.731 12552.665 - 12603.077: 47.1961% ( 83) 00:09:32.731 12603.077 - 12653.489: 47.7633% ( 53) 00:09:32.731 12653.489 - 12703.902: 48.4268% ( 62) 00:09:32.731 12703.902 - 12754.314: 48.9298% ( 47) 00:09:32.731 12754.314 - 12804.726: 49.5291% ( 56) 00:09:32.731 12804.726 - 12855.138: 50.1498% ( 58) 00:09:32.731 12855.138 - 12905.551: 50.6849% ( 50) 00:09:32.731 12905.551 - 13006.375: 51.7658% ( 101) 00:09:32.731 13006.375 - 13107.200: 52.9966% ( 115) 00:09:32.731 13107.200 - 13208.025: 53.5531% ( 52) 00:09:32.731 13208.025 - 13308.849: 54.0989% ( 51) 00:09:32.731 13308.849 - 13409.674: 54.7624% ( 62) 00:09:32.731 13409.674 - 13510.498: 55.4045% ( 60) 00:09:32.731 13510.498 - 13611.323: 55.9717% 
( 53) 00:09:32.731 13611.323 - 13712.148: 56.6246% ( 61) 00:09:32.731 13712.148 - 13812.972: 57.2025% ( 54) 00:09:32.731 13812.972 - 13913.797: 57.7697% ( 53) 00:09:32.731 13913.797 - 14014.622: 58.7650% ( 93) 00:09:32.731 14014.622 - 14115.446: 59.8138% ( 98) 00:09:32.731 14115.446 - 14216.271: 60.8305% ( 95) 00:09:32.731 14216.271 - 14317.095: 61.9114% ( 101) 00:09:32.731 14317.095 - 14417.920: 63.2491% ( 125) 00:09:32.731 14417.920 - 14518.745: 64.2551% ( 94) 00:09:32.731 14518.745 - 14619.569: 65.1969% ( 88) 00:09:32.731 14619.569 - 14720.394: 66.5454% ( 126) 00:09:32.731 14720.394 - 14821.218: 67.9580% ( 132) 00:09:32.731 14821.218 - 14922.043: 68.9640% ( 94) 00:09:32.731 14922.043 - 15022.868: 70.0235% ( 99) 00:09:32.731 15022.868 - 15123.692: 71.2222% ( 112) 00:09:32.731 15123.692 - 15224.517: 72.4101% ( 111) 00:09:32.731 15224.517 - 15325.342: 73.9191% ( 141) 00:09:32.731 15325.342 - 15426.166: 75.0535% ( 106) 00:09:32.731 15426.166 - 15526.991: 76.2628% ( 113) 00:09:32.731 15526.991 - 15627.815: 77.7076% ( 135) 00:09:32.731 15627.815 - 15728.640: 78.8848% ( 110) 00:09:32.731 15728.640 - 15829.465: 79.9872% ( 103) 00:09:32.731 15829.465 - 15930.289: 81.3142% ( 124) 00:09:32.731 15930.289 - 16031.114: 82.2667% ( 89) 00:09:32.731 16031.114 - 16131.938: 83.2085% ( 88) 00:09:32.731 16131.938 - 16232.763: 83.8720% ( 62) 00:09:32.731 16232.763 - 16333.588: 84.6533% ( 73) 00:09:32.731 16333.588 - 16434.412: 85.4238% ( 72) 00:09:32.731 16434.412 - 16535.237: 86.1836% ( 71) 00:09:32.731 16535.237 - 16636.062: 86.8686% ( 64) 00:09:32.731 16636.062 - 16736.886: 87.3395% ( 44) 00:09:32.731 16736.886 - 16837.711: 87.8211% ( 45) 00:09:32.731 16837.711 - 16938.535: 88.3562% ( 50) 00:09:32.731 16938.535 - 17039.360: 88.8913% ( 50) 00:09:32.731 17039.360 - 17140.185: 89.4692% ( 54) 00:09:32.731 17140.185 - 17241.009: 90.0685% ( 56) 00:09:32.731 17241.009 - 17341.834: 90.4859% ( 39) 00:09:32.731 17341.834 - 17442.658: 90.8818% ( 37) 00:09:32.731 17442.658 - 17543.483: 91.1601% ( 26) 00:09:32.731 17543.483 - 17644.308: 91.3741% ( 20) 00:09:32.731 17644.308 - 17745.132: 91.6417% ( 25) 00:09:32.731 17745.132 - 17845.957: 91.8450% ( 19) 00:09:32.731 17845.957 - 17946.782: 92.0912% ( 23) 00:09:32.731 17946.782 - 18047.606: 92.3694% ( 26) 00:09:32.731 18047.606 - 18148.431: 92.6798% ( 29) 00:09:32.731 18148.431 - 18249.255: 92.9580% ( 26) 00:09:32.731 18249.255 - 18350.080: 93.2898% ( 31) 00:09:32.731 18350.080 - 18450.905: 93.6323% ( 32) 00:09:32.731 18450.905 - 18551.729: 94.0604% ( 40) 00:09:32.731 18551.729 - 18652.554: 94.4456% ( 36) 00:09:32.731 18652.554 - 18753.378: 94.7667% ( 30) 00:09:32.731 18753.378 - 18854.203: 95.1306% ( 34) 00:09:32.731 18854.203 - 18955.028: 95.4195% ( 27) 00:09:32.731 18955.028 - 19055.852: 95.6550% ( 22) 00:09:32.731 19055.852 - 19156.677: 95.8797% ( 21) 00:09:32.731 19156.677 - 19257.502: 96.0938% ( 20) 00:09:32.731 19257.502 - 19358.326: 96.2864% ( 18) 00:09:32.731 19358.326 - 19459.151: 96.3827% ( 9) 00:09:32.731 19459.151 - 19559.975: 96.5218% ( 13) 00:09:32.731 19559.975 - 19660.800: 96.6717% ( 14) 00:09:32.731 19660.800 - 19761.625: 96.7894% ( 11) 00:09:32.731 19761.625 - 19862.449: 96.8964% ( 10) 00:09:32.731 19862.449 - 19963.274: 96.9820% ( 8) 00:09:32.731 19963.274 - 20064.098: 97.0783% ( 9) 00:09:32.731 20064.098 - 20164.923: 97.1318% ( 5) 00:09:32.731 20164.923 - 20265.748: 97.1854% ( 5) 00:09:32.731 20265.748 - 20366.572: 97.2282% ( 4) 00:09:32.731 20366.572 - 20467.397: 97.2603% ( 3) 00:09:32.731 20769.871 - 20870.695: 97.3138% ( 5) 00:09:32.731 20870.695 - 
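For context on how tables like these are generated: the perf tool samples each IO's completion time, drops it into a latency bucket, and prints a running cumulative percentage for every non-empty bucket. A minimal, self-contained C sketch with invented names and linear bucket widths (the bucket ranges above clearly grow geometrically; this is an illustration, not SPDK's histogram code):

    #include <stdint.h>
    #include <stdio.h>

    #define NBUCKETS        64      /* illustrative bucket count */
    #define BUCKET_WIDTH_US 500.0   /* illustrative linear width */

    static uint64_t bucket[NBUCKETS];
    static uint64_t total;

    /* Tally one IO completion latency, in microseconds. */
    static void histogram_tally(double latency_us)
    {
        int idx = (int)(latency_us / BUCKET_WIDTH_US);

        if (idx >= NBUCKETS) {
            idx = NBUCKETS - 1;     /* clamp outliers into the last bucket */
        }
        bucket[idx]++;
        total++;
    }

    /* Walk the buckets and print a cumulative table like the ones above. */
    static void histogram_print(void)
    {
        uint64_t running = 0;

        printf("       Range in us     Cumulative    IO count\n");
        for (int i = 0; i < NBUCKETS; i++) {
            if (bucket[i] == 0) {
                continue;           /* empty buckets are skipped in the report */
            }
            running += bucket[i];
            printf("%10.3f - %10.3f: %8.4f%% (%8ju)\n",
                   i * BUCKET_WIDTH_US, (i + 1) * BUCKET_WIDTH_US,
                   100.0 * (double)running / (double)total,
                   (uintmax_t)bucket[i]);
        }
    }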
00:09:32.731 19:27:59 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:32.731 19:27:59 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:32.731 19:27:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:32.731 19:27:59 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:32.731 ************************************
00:09:32.731 START TEST nvme_hello_world
00:09:32.731 ************************************
00:09:32.731 19:27:59 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:32.731 Initializing NVMe Controllers
00:09:32.731 Attached to 0000:00:13.0
00:09:32.731 Namespace ID: 1 size: 1GB
00:09:32.731 Attached to 0000:00:10.0
00:09:32.731 Namespace ID: 1 size: 6GB
00:09:32.731 Attached to 0000:00:11.0
00:09:32.731 Namespace ID: 1 size: 5GB
00:09:32.731 Attached to 0000:00:12.0
00:09:32.731 Namespace ID: 1 size: 4GB
00:09:32.731 Namespace ID: 2 size: 4GB
00:09:32.731 Namespace ID: 3 size: 4GB
00:09:32.732 Initialization complete.
00:09:32.732 INFO: using host memory buffer for IO
00:09:32.732 Hello world!
00:09:32.732 INFO: using host memory buffer for IO
00:09:32.732 Hello world!
00:09:32.732 INFO: using host memory buffer for IO
00:09:32.732 Hello world!
00:09:32.732 INFO: using host memory buffer for IO
00:09:32.732 Hello world!
00:09:32.732 INFO: using host memory buffer for IO
00:09:32.732 Hello world!
00:09:32.732 INFO: using host memory buffer for IO
00:09:32.732 Hello world!
00:09:32.732
00:09:32.732 real	0m0.245s
00:09:32.732 user	0m0.081s
00:09:32.732 sys	0m0.120s
00:09:32.732 19:27:59 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:32.732 ************************************
00:09:32.732 END TEST nvme_hello_world
00:09:32.732 ************************************
00:09:32.732 19:27:59 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
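hello_world does one write and one read-back per attached namespace, which is why six "Hello world!" lines appear for the six namespaces above. A hedged sketch of that round trip against the public SPDK NVMe API, assuming the controller is already attached and a qpair allocated; error handling is trimmed and this is not the example's verbatim code:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <stdbool.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    /* Completion callback: flip the flag the polling loops below wait on. */
    static void io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)cpl;
        *(bool *)arg = true;
    }

    /* Write "Hello world!" to LBA 0 of one namespace and read it back.
     * Assumes ns/qpair came from spdk_nvme_probe()'s attach callback and
     * spdk_nvme_ctrlr_alloc_io_qpair(). */
    static void hello_roundtrip(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair)
    {
        uint32_t sector = spdk_nvme_ns_get_sector_size(ns);
        bool done = false;

        /* One sector of pinned, DMA-able memory. */
        char *buf = spdk_zmalloc(sector, 0x1000, NULL,
                                 SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
        snprintf(buf, sector, "Hello world!\n");

        spdk_nvme_ns_cmd_write(ns, qpair, buf, 0, 1, io_complete, &done, 0);
        while (!done) {
            spdk_nvme_qpair_process_completions(qpair, 0);
        }

        done = false;
        memset(buf, 0, sector);
        spdk_nvme_ns_cmd_read(ns, qpair, buf, 0, 1, io_complete, &done, 0);
        while (!done) {
            spdk_nvme_qpair_process_completions(qpair, 0);
        }

        printf("%s", buf);      /* one "Hello world!" per namespace */
        spdk_free(buf);
    }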
00:09:32.732 19:27:59 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:32.732 19:27:59 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:32.732 19:27:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:32.732 19:27:59 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:32.993 ************************************
00:09:32.993 START TEST nvme_sgl
00:09:32.993 ************************************
00:09:32.993 19:27:59 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:32.993 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:09:32.993 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:09:32.993 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:09:32.993 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:09:32.993 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:09:32.993 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:09:32.993 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:09:32.993 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:09:32.993 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:09:32.993 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:09:32.993 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:09:32.993 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:09:32.993 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:09:32.993 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:09:32.993 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:09:32.993 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:09:32.993 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:09:32.993 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:09:32.993 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:09:32.993 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:09:32.993 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:09:33.253 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:09:33.253 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:09:33.253 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:09:33.253 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:09:33.253 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:09:33.253 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:09:33.253 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:09:33.253 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:09:33.253 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:09:33.253 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:09:33.253 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:09:33.253 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:09:33.253 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:09:33.253 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:09:33.253 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:09:33.253 NVMe Readv/Writev Request test
00:09:33.253 Attached to 0000:00:13.0
00:09:33.254 Attached to 0000:00:10.0
00:09:33.254 Attached to 0000:00:11.0
00:09:33.254 Attached to 0000:00:12.0
00:09:33.254 0000:00:10.0: build_io_request_2 test passed
00:09:33.254 0000:00:10.0: build_io_request_4 test passed
00:09:33.254 0000:00:10.0: build_io_request_5 test passed
00:09:33.254 0000:00:10.0: build_io_request_6 test passed
00:09:33.254 0000:00:10.0: build_io_request_7 test passed
00:09:33.254 0000:00:10.0: build_io_request_10 test passed
00:09:33.254 0000:00:11.0: build_io_request_2 test passed
00:09:33.254 0000:00:11.0: build_io_request_4 test passed
00:09:33.254 0000:00:11.0: build_io_request_5 test passed
00:09:33.254 0000:00:11.0: build_io_request_6 test passed
00:09:33.254 0000:00:11.0: build_io_request_7 test passed
00:09:33.254 0000:00:11.0: build_io_request_10 test passed
00:09:33.254 Cleaning up...
00:09:33.254
00:09:33.254 real	0m0.320s
00:09:33.254 user	0m0.160s
00:09:33.254 sys	0m0.108s
00:09:33.254 19:28:00 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:33.254 ************************************
00:09:33.254 END TEST nvme_sgl
00:09:33.254 ************************************
00:09:33.254 19:28:00 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
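The sgl test drives vectored IO through spdk_nvme_ns_cmd_readv()/writev(), which take reset_sgl/next_sge callbacks instead of a flat buffer; the "Invalid IO length parameter" lines are the expected rejections of requests the test deliberately mis-sizes. A sketch of the callback plumbing with an invented two-segment context (note that cb_arg is handed both to the completion callback and to the SGL callbacks):

    #include <stdint.h>
    #include "spdk/nvme.h"

    /* Illustrative two-segment scatter list walked by the callbacks below. */
    struct sgl_ctx {
        struct { void *base; uint32_t len; } seg[2];
        uint32_t cur;
    };

    /* Called by the driver before (re)walking the list. */
    static void reset_sgl(void *arg, uint32_t offset)
    {
        struct sgl_ctx *ctx = arg;

        (void)offset;           /* offset-into-payload handling elided */
        ctx->cur = 0;
    }

    /* Called repeatedly to fetch the next segment. */
    static int next_sge(void *arg, void **address, uint32_t *length)
    {
        struct sgl_ctx *ctx = arg;

        *address = ctx->seg[ctx->cur].base;
        *length = ctx->seg[ctx->cur].len;
        ctx->cur++;
        return 0;
    }

    /* Submit a vectored read of 8 LBAs starting at LBA 0. If the segment
     * lengths do not add up to a whole number of sectors, submission is
     * rejected, which is what the expected-failure cases rely on. */
    static int submit_readv(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                            struct sgl_ctx *ctx, spdk_nvme_cmd_cb cb)
    {
        return spdk_nvme_ns_cmd_readv(ns, qpair, 0, 8, cb, ctx, 0,
                                      reset_sgl, next_sge);
    }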
00:09:33.254 19:28:00 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:33.254 19:28:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:33.254 19:28:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:33.254 19:28:00 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:33.254 ************************************
00:09:33.254 START TEST nvme_e2edp
00:09:33.254 ************************************
00:09:33.254 19:28:00 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:33.515 NVMe Write/Read with End-to-End data protection test
00:09:33.515 Attached to 0000:00:13.0
00:09:33.515 Attached to 0000:00:10.0
00:09:33.515 Attached to 0000:00:11.0
00:09:33.515 Attached to 0000:00:12.0
00:09:33.515 Cleaning up...
00:09:33.515
00:09:33.515 real	0m0.224s
00:09:33.515 user	0m0.088s
00:09:33.515 sys	0m0.091s
00:09:33.515 19:28:00 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:33.515 19:28:00 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:09:33.515 ************************************
00:09:33.515 END TEST nvme_e2edp
00:09:33.515 ************************************
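End-to-end data protection extends each logical block with a protection information tuple that both host and controller can verify. For reference, the standard 8-byte T10 PI layout the test exercises; this is the generic on-the-wire format (fields big-endian), not an SPDK-private structure:

    #include <stdint.h>

    /* T10 Protection Information tuple appended to each logical block
     * (Type 1 semantics shown). */
    struct t10_pi_tuple {
        uint16_t guard;     /* CRC-16 over the block's data */
        uint16_t app_tag;   /* opaque, application-controlled tag */
        uint32_t ref_tag;   /* low 32 bits of the expected LBA (Type 1) */
    };

Individual reads and writes opt into checking these fields per command through io_flags; in SPDK those are the SPDK_NVME_IO_FLAGS_PRCHK_* flags, if memory serves.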
00:09:33.515 19:28:00 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:33.515 19:28:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:33.515 19:28:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:33.515 19:28:00 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:33.515 ************************************
00:09:33.515 START TEST nvme_reserve
00:09:33.515 ************************************
00:09:33.515 19:28:00 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:33.776 =====================================================
00:09:33.776 NVMe Controller at PCI bus 0, device 19, function 0
00:09:33.776 =====================================================
00:09:33.776 Reservations: Not Supported
00:09:33.776 =====================================================
00:09:33.776 NVMe Controller at PCI bus 0, device 16, function 0
00:09:33.776 =====================================================
00:09:33.776 Reservations: Not Supported
00:09:33.776 =====================================================
00:09:33.776 NVMe Controller at PCI bus 0, device 17, function 0
00:09:33.776 =====================================================
00:09:33.776 Reservations: Not Supported
00:09:33.776 =====================================================
00:09:33.776 NVMe Controller at PCI bus 0, device 18, function 0
00:09:33.776 =====================================================
00:09:33.776 Reservations: Not Supported
00:09:33.776 Reservation test passed
00:09:33.776
00:09:33.776 real	0m0.227s
00:09:33.776 user	0m0.083s
00:09:33.776 sys	0m0.101s
00:09:33.776 19:28:00 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:33.776 ************************************
00:09:33.776 END TEST nvme_reserve
00:09:33.776 ************************************
00:09:33.776 19:28:00 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
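All four QEMU controllers report "Reservations: Not Supported", so the test passes without exercising the reservation path. On supporting hardware it would register a key and then acquire the namespace, roughly as in this hedged sketch; key values are arbitrary and completion polling between the two asynchronous calls is elided:

    #include <stdbool.h>
    #include "spdk/nvme.h"

    /* Register a reservation key, then acquire a write-exclusive
     * reservation on the namespace. */
    static int take_reservation(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                                spdk_nvme_cmd_cb cb, void *cb_arg)
    {
        struct spdk_nvme_reservation_register_data rdata = {
            .crkey = 0,             /* no current key yet */
            .nrkey = 0xabcd1234,    /* new key to register */
        };
        struct spdk_nvme_reservation_acquire_data adata = {
            .crkey = 0xabcd1234,    /* the key registered above */
        };
        int rc;

        rc = spdk_nvme_ns_cmd_reservation_register(ns, qpair, &rdata,
                true /* ignore_key */, SPDK_NVME_RESERVE_REGISTER_KEY,
                SPDK_NVME_RESERVE_PTPL_NO_CHANGES, cb, cb_arg);
        if (rc != 0) {
            return rc;
        }
        /* ...poll completions here before acquiring... */
        return spdk_nvme_ns_cmd_reservation_acquire(ns, qpair, &adata,
                false, SPDK_NVME_RESERVE_ACQUIRE,
                SPDK_NVME_RESERVE_WRITE_EXCLUSIVE, cb, cb_arg);
    }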
00:09:33.776 19:28:00 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:33.776 19:28:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:33.776 19:28:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:33.776 19:28:00 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:33.776 ************************************
00:09:33.776 START TEST nvme_err_injection
00:09:33.776 ************************************
00:09:33.776 19:28:00 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:34.037 NVMe Error Injection test
00:09:34.037 Attached to 0000:00:13.0
00:09:34.037 Attached to 0000:00:10.0
00:09:34.037 Attached to 0000:00:11.0
00:09:34.037 Attached to 0000:00:12.0
00:09:34.037 0000:00:13.0: get features failed as expected
00:09:34.037 0000:00:10.0: get features failed as expected
00:09:34.037 0000:00:11.0: get features failed as expected
00:09:34.037 0000:00:12.0: get features failed as expected
00:09:34.037 0000:00:13.0: get features successfully as expected
00:09:34.037 0000:00:10.0: get features successfully as expected
00:09:34.037 0000:00:11.0: get features successfully as expected
00:09:34.037 0000:00:12.0: get features successfully as expected
00:09:34.037 0000:00:12.0: read failed as expected
00:09:34.037 0000:00:13.0: read failed as expected
00:09:34.037 0000:00:10.0: read failed as expected
00:09:34.037 0000:00:11.0: read failed as expected
00:09:34.037 0000:00:12.0: read successfully as expected
00:09:34.037 0000:00:13.0: read successfully as expected
00:09:34.037 0000:00:10.0: read successfully as expected
00:09:34.037 0000:00:11.0: read successfully as expected
00:09:34.037 Cleaning up...
00:09:34.037
00:09:34.037 real	0m0.219s
00:09:34.037 user	0m0.076s
00:09:34.037 sys	0m0.101s
00:09:34.037 19:28:01 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:34.037 ************************************
00:09:34.037 19:28:01 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:09:34.037 END TEST nvme_err_injection
00:09:34.037 ************************************
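The failed/succeeded pairs follow an arm, observe, disarm pattern built on the SPDK command error injection API (available when the library is built with error injection support). A hedged sketch of the GET FEATURES case from the log:

    #include "spdk/nvme.h"

    /* Arm one injected failure for GET FEATURES on the admin queue,
     * observe the expected failure, then disarm and observe success. */
    static void get_features_inject(struct spdk_nvme_ctrlr *ctrlr)
    {
        /* NULL qpair targets the admin qpair; fail exactly one command
         * with a generic Invalid Opcode status. */
        spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
                SPDK_NVME_OPC_GET_FEATURES, false /* do_not_submit */,
                0 /* timeout_in_us */, 1 /* err_count */,
                SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_INVALID_OPCODE);

        /* ...issue GET FEATURES: it completes with the injected status
         * ("get features failed as expected")... */

        spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
                SPDK_NVME_OPC_GET_FEATURES);

        /* ...the same command now succeeds
         * ("get features successfully as expected")... */
    }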
00:09:34.037 19:28:01 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:09:34.037 19:28:01 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:09:34.037 19:28:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:34.037 19:28:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:34.037 ************************************
00:09:34.037 START TEST nvme_overhead
00:09:34.037 ************************************
00:09:34.037 19:28:01 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:09:35.472 Initializing NVMe Controllers
00:09:35.472 Attached to 0000:00:13.0
00:09:35.472 Attached to 0000:00:10.0
00:09:35.472 Attached to 0000:00:11.0
00:09:35.472 Attached to 0000:00:12.0
00:09:35.472 Initialization complete. Launching workers.
00:09:35.472 submit (in ns)   avg, min, max = 12391.0, 10627.7, 134263.1
00:09:35.472 complete (in ns) avg, min, max = 8083.3, 7266.2, 78062.3
00:09:35.472
00:09:35.472 Submit histogram
00:09:35.472 ================
00:09:35.472        Range in us     Cumulative     Count
00:09:35.473     [latency buckets from 10.585 us through 134.695 us: cumulative count rises from 0.0254% (        1) to 100.0000% (        1) at 134.695 us]
00:09:35.473
00:09:35.473 Complete histogram
00:09:35.473 ==================
00:09:35.473        Range in us     Cumulative     Count
00:09:35.474     [latency buckets from 7.237 us through 78.375 us: cumulative count rises from 0.0508% (        2) to 100.0000% (        1) at 78.375 us]
00:09:35.474
00:09:35.474 real	0m1.236s
00:09:35.474 user	0m1.071s
00:09:35.474 sys	0m0.113s
00:09:35.474 19:28:02 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:35.474 ************************************
00:09:35.474 END TEST nvme_overhead
00:09:35.474 ************************************
00:09:35.474 19:28:02 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:35.474 19:28:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.474 19:28:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:35.474 ************************************ 00:09:35.474 START TEST nvme_arbitration 00:09:35.474 ************************************ 00:09:35.474 19:28:02 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:38.793 Initializing NVMe Controllers 00:09:38.793 Attached to 0000:00:13.0 00:09:38.793 Attached to 0000:00:10.0 00:09:38.793 Attached to 0000:00:11.0 00:09:38.793 Attached to 0000:00:12.0 00:09:38.793 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:09:38.793 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:09:38.793 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:09:38.793 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:38.793 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:38.793 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:38.793 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:38.793 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:38.793 Initialization complete. Launching workers. 00:09:38.793 Starting thread on core 1 with urgent priority queue 00:09:38.793 Starting thread on core 2 with urgent priority queue 00:09:38.793 Starting thread on core 3 with urgent priority queue 00:09:38.793 Starting thread on core 0 with urgent priority queue 00:09:38.793 QEMU NVMe Ctrl (12343 ) core 0: 853.33 IO/s 117.19 secs/100000 ios 00:09:38.793 QEMU NVMe Ctrl (12342 ) core 0: 853.33 IO/s 117.19 secs/100000 ios 00:09:38.793 QEMU NVMe Ctrl (12340 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:09:38.793 QEMU NVMe Ctrl (12342 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:09:38.793 QEMU NVMe Ctrl (12341 ) core 2: 896.00 IO/s 111.61 secs/100000 ios 00:09:38.793 QEMU NVMe Ctrl (12342 ) core 3: 832.00 IO/s 120.19 secs/100000 ios 00:09:38.793 ======================================================== 00:09:38.793 00:09:38.793 ************************************ 00:09:38.793 END TEST nvme_arbitration 00:09:38.793 ************************************ 00:09:38.793 00:09:38.793 real 0m3.313s 00:09:38.793 user 0m9.213s 00:09:38.793 sys 0m0.126s 00:09:38.793 19:28:05 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.793 19:28:05 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:38.793 19:28:05 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:38.793 19:28:05 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:38.793 19:28:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.793 19:28:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:38.793 ************************************ 00:09:38.793 START TEST nvme_single_aen 00:09:38.793 ************************************ 00:09:38.793 19:28:05 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:39.053 Asynchronous Event Request test 00:09:39.053 Attached to 0000:00:13.0 00:09:39.053 Attached to 0000:00:10.0 00:09:39.053 Attached to 0000:00:11.0 00:09:39.053 Attached to 0000:00:12.0 00:09:39.053 Reset controller to setup AER completions for this process 00:09:39.053 Registering asynchronous 
event callbacks... 00:09:39.053 Getting orig temperature thresholds of all controllers 00:09:39.053 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:39.053 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:39.053 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:39.053 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:39.053 Setting all controllers temperature threshold low to trigger AER 00:09:39.053 Waiting for all controllers temperature threshold to be set lower 00:09:39.054 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:39.054 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:39.054 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:39.054 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:39.054 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:39.054 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:39.054 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:39.054 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:39.054 Waiting for all controllers to trigger AER and reset threshold 00:09:39.054 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:39.054 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:39.054 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:39.054 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:39.054 Cleaning up... 00:09:39.054 00:09:39.054 real 0m0.225s 00:09:39.054 user 0m0.078s 00:09:39.054 sys 0m0.098s 00:09:39.054 19:28:06 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.054 19:28:06 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:39.054 ************************************ 00:09:39.054 END TEST nvme_single_aen 00:09:39.054 ************************************ 00:09:39.054 19:28:06 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:39.054 19:28:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:39.054 19:28:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.054 19:28:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:39.054 ************************************ 00:09:39.054 START TEST nvme_doorbell_aers 00:09:39.054 ************************************ 00:09:39.054 19:28:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:09:39.054 19:28:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:39.054 19:28:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:39.054 19:28:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:39.054 19:28:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:39.054 19:28:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:39.054 19:28:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:09:39.054 19:28:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:39.054 19:28:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:39.054 19:28:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:09:39.054 19:28:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:39.054 19:28:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:39.054 19:28:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:39.054 19:28:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:39.314 [2024-12-05 19:28:06.401935] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63616) is not found. Dropping the request. 00:09:49.313 Executing: test_write_invalid_db 00:09:49.313 Waiting for AER completion... 00:09:49.313 Failure: test_write_invalid_db 00:09:49.313 00:09:49.313 Executing: test_invalid_db_write_overflow_sq 00:09:49.313 Waiting for AER completion... 00:09:49.313 Failure: test_invalid_db_write_overflow_sq 00:09:49.313 00:09:49.313 Executing: test_invalid_db_write_overflow_cq 00:09:49.313 Waiting for AER completion... 00:09:49.313 Failure: test_invalid_db_write_overflow_cq 00:09:49.313 00:09:49.313 19:28:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:49.313 19:28:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:49.313 [2024-12-05 19:28:16.451030] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63616) is not found. Dropping the request. 00:09:59.282 Executing: test_write_invalid_db 00:09:59.282 Waiting for AER completion... 00:09:59.282 Failure: test_write_invalid_db 00:09:59.282 00:09:59.282 Executing: test_invalid_db_write_overflow_sq 00:09:59.282 Waiting for AER completion... 00:09:59.282 Failure: test_invalid_db_write_overflow_sq 00:09:59.282 00:09:59.282 Executing: test_invalid_db_write_overflow_cq 00:09:59.282 Waiting for AER completion... 00:09:59.282 Failure: test_invalid_db_write_overflow_cq 00:09:59.282 00:09:59.282 19:28:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:59.282 19:28:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:59.282 [2024-12-05 19:28:26.457866] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63616) is not found. Dropping the request. 00:10:09.239 Executing: test_write_invalid_db 00:10:09.239 Waiting for AER completion... 00:10:09.239 Failure: test_write_invalid_db 00:10:09.239 00:10:09.239 Executing: test_invalid_db_write_overflow_sq 00:10:09.239 Waiting for AER completion... 00:10:09.239 Failure: test_invalid_db_write_overflow_sq 00:10:09.239 00:10:09.239 Executing: test_invalid_db_write_overflow_cq 00:10:09.239 Waiting for AER completion... 
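For reference, the per-controller loop that nvme_doorbell_aers traces in this stretch reduces to the sketch below. Both commands appear verbatim in the nvme.sh trace above; the checkout path is the one this run uses, and the loop body is a simplification of the real script.

    rootdir=/home/vagrant/spdk_repo/spdk
    # Enumerate NVMe PCI addresses the way get_nvme_bdfs does above:
    # gen_nvme.sh emits a JSON config and jq pulls out each traddr.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    # Run the doorbell/AER reproducer against every controller, capped at
    # 10 seconds so a missed AER completion cannot hang the whole suite.
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done

Each controller then cycles through the three write/overflow cases logged here before the loop moves to the next PCI address.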
00:10:09.239 Failure: test_invalid_db_write_overflow_cq 00:10:09.239 00:10:09.239 19:28:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:09.239 19:28:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:09.497 [2024-12-05 19:28:36.508579] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63616) is not found. Dropping the request. 00:10:19.538 Executing: test_write_invalid_db 00:10:19.538 Waiting for AER completion... 00:10:19.538 Failure: test_write_invalid_db 00:10:19.538 00:10:19.538 Executing: test_invalid_db_write_overflow_sq 00:10:19.538 Waiting for AER completion... 00:10:19.538 Failure: test_invalid_db_write_overflow_sq 00:10:19.538 00:10:19.538 Executing: test_invalid_db_write_overflow_cq 00:10:19.538 Waiting for AER completion... 00:10:19.538 Failure: test_invalid_db_write_overflow_cq 00:10:19.538 00:10:19.538 00:10:19.538 real 0m40.192s 00:10:19.538 user 0m34.063s 00:10:19.538 sys 0m5.728s 00:10:19.538 19:28:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.538 ************************************ 00:10:19.538 END TEST nvme_doorbell_aers 00:10:19.538 ************************************ 00:10:19.538 19:28:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:19.538 19:28:46 nvme -- nvme/nvme.sh@97 -- # uname 00:10:19.538 19:28:46 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:19.538 19:28:46 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:19.538 19:28:46 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:19.538 19:28:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.538 19:28:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:19.538 ************************************ 00:10:19.538 START TEST nvme_multi_aen 00:10:19.538 ************************************ 00:10:19.538 19:28:46 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:19.538 [2024-12-05 19:28:46.552037] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63616) is not found. Dropping the request. 00:10:19.538 [2024-12-05 19:28:46.552104] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63616) is not found. Dropping the request. 00:10:19.538 [2024-12-05 19:28:46.552116] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63616) is not found. Dropping the request. 00:10:19.538 [2024-12-05 19:28:46.553757] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63616) is not found. Dropping the request. 00:10:19.538 [2024-12-05 19:28:46.553802] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63616) is not found. Dropping the request. 00:10:19.538 [2024-12-05 19:28:46.553814] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63616) is not found. Dropping the request. 00:10:19.538 [2024-12-05 19:28:46.554939] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63616) is not found. 
Dropping the request. 00:10:19.538 [2024-12-05 19:28:46.554970] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63616) is not found. Dropping the request. 00:10:19.538 [2024-12-05 19:28:46.554981] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63616) is not found. Dropping the request. 00:10:19.538 [2024-12-05 19:28:46.556135] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63616) is not found. Dropping the request. 00:10:19.538 [2024-12-05 19:28:46.556165] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63616) is not found. Dropping the request. 00:10:19.538 [2024-12-05 19:28:46.556174] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63616) is not found. Dropping the request. 00:10:19.538 Child process pid: 64135 00:10:19.538 [Child] Asynchronous Event Request test 00:10:19.538 [Child] Attached to 0000:00:13.0 00:10:19.538 [Child] Attached to 0000:00:10.0 00:10:19.538 [Child] Attached to 0000:00:11.0 00:10:19.538 [Child] Attached to 0000:00:12.0 00:10:19.538 [Child] Registering asynchronous event callbacks... 00:10:19.538 [Child] Getting orig temperature thresholds of all controllers 00:10:19.538 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.538 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.538 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.538 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.538 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:19.538 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.538 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.538 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.538 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.538 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.538 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.538 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.538 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.538 [Child] Cleaning up... 00:10:19.797 Asynchronous Event Request test 00:10:19.797 Attached to 0000:00:13.0 00:10:19.797 Attached to 0000:00:10.0 00:10:19.797 Attached to 0000:00:11.0 00:10:19.797 Attached to 0000:00:12.0 00:10:19.797 Reset controller to setup AER completions for this process 00:10:19.797 Registering asynchronous event callbacks... 
00:10:19.797 Getting orig temperature thresholds of all controllers 00:10:19.797 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.797 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.797 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.797 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.797 Setting all controllers temperature threshold low to trigger AER 00:10:19.797 Waiting for all controllers temperature threshold to be set lower 00:10:19.797 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.797 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:19.797 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.797 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:19.797 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.797 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:19.797 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.797 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:19.797 Waiting for all controllers to trigger AER and reset threshold 00:10:19.797 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.797 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.797 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.797 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.797 Cleaning up... 00:10:19.797 00:10:19.797 real 0m0.451s 00:10:19.797 user 0m0.156s 00:10:19.797 sys 0m0.176s 00:10:19.797 19:28:46 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.797 ************************************ 00:10:19.797 19:28:46 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:19.797 END TEST nvme_multi_aen 00:10:19.797 ************************************ 00:10:19.797 19:28:46 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:19.797 19:28:46 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:19.797 19:28:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.797 19:28:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:19.797 ************************************ 00:10:19.797 START TEST nvme_startup 00:10:19.797 ************************************ 00:10:19.797 19:28:46 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:20.055 Initializing NVMe Controllers 00:10:20.055 Attached to 0000:00:13.0 00:10:20.055 Attached to 0000:00:10.0 00:10:20.055 Attached to 0000:00:11.0 00:10:20.055 Attached to 0000:00:12.0 00:10:20.055 Initialization complete. 00:10:20.055 Time used:164628.438 (us). 
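The nvme_multi_secondary test that follows exercises SPDK's multi-process mode: one longer-lived spdk_nvme_perf process and two short-lived secondaries share the same controllers through a common instance id. A simplified sketch of that pattern, using the exact flags traced below (the real script backgrounds and waits on the runs in a slightly different order):

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    # All three pass -i 0, so they join one DPDK shared-memory instance and
    # can attach to the same controllers; disjoint core masks keep them apart.
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # longer-lived process, core 0
    pid0=$!
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary on core 1
    pid1=$!
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4     # secondary on core 2
    wait "$pid0" "$pid1"

The per-namespace latency tables printed below are the output of these workers.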
00:10:20.055 00:10:20.055 real 0m0.227s 00:10:20.055 user 0m0.074s 00:10:20.055 sys 0m0.103s 00:10:20.055 19:28:47 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.055 ************************************ 00:10:20.055 END TEST nvme_startup 00:10:20.055 ************************************ 00:10:20.055 19:28:47 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:20.055 19:28:47 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:20.055 19:28:47 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:20.055 19:28:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.055 19:28:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:20.055 ************************************ 00:10:20.055 START TEST nvme_multi_secondary 00:10:20.055 ************************************ 00:10:20.055 19:28:47 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:10:20.055 19:28:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=64191 00:10:20.055 19:28:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:20.055 19:28:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=64192 00:10:20.055 19:28:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:20.055 19:28:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:23.327 Initializing NVMe Controllers 00:10:23.327 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:23.327 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:23.327 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:23.327 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:23.327 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:23.327 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:23.327 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:23.327 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:23.327 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:23.327 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:23.327 Initialization complete. Launching workers. 
00:10:23.327 ======================================================== 00:10:23.327 Latency(us) 00:10:23.327 Device Information : IOPS MiB/s Average min max 00:10:23.327 PCIE (0000:00:13.0) NSID 1 from core 1: 7104.14 27.75 2251.77 893.29 7290.46 00:10:23.328 PCIE (0000:00:10.0) NSID 1 from core 1: 7098.81 27.73 2252.38 757.03 7161.55 00:10:23.328 PCIE (0000:00:11.0) NSID 1 from core 1: 7104.14 27.75 2251.75 760.97 7030.46 00:10:23.328 PCIE (0000:00:12.0) NSID 1 from core 1: 7098.81 27.73 2253.50 759.26 7719.14 00:10:23.328 PCIE (0000:00:12.0) NSID 2 from core 1: 7098.81 27.73 2253.57 756.75 7403.23 00:10:23.328 PCIE (0000:00:12.0) NSID 3 from core 1: 7098.81 27.73 2253.68 888.80 7112.07 00:10:23.328 ======================================================== 00:10:23.328 Total : 42603.52 166.42 2252.78 756.75 7719.14 00:10:23.328 00:10:23.328 Initializing NVMe Controllers 00:10:23.328 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:23.328 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:23.328 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:23.328 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:23.328 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:23.328 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:23.328 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:23.328 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:23.328 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:23.328 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:23.328 Initialization complete. Launching workers. 00:10:23.328 ======================================================== 00:10:23.328 Latency(us) 00:10:23.328 Device Information : IOPS MiB/s Average min max 00:10:23.328 PCIE (0000:00:13.0) NSID 1 from core 2: 2993.04 11.69 5345.33 983.07 18892.77 00:10:23.328 PCIE (0000:00:10.0) NSID 1 from core 2: 2987.71 11.67 5352.95 992.92 20462.45 00:10:23.328 PCIE (0000:00:11.0) NSID 1 from core 2: 2993.04 11.69 5344.80 981.67 20444.82 00:10:23.328 PCIE (0000:00:12.0) NSID 1 from core 2: 2993.04 11.69 5345.01 915.70 20477.09 00:10:23.328 PCIE (0000:00:12.0) NSID 2 from core 2: 2993.04 11.69 5344.27 1113.58 15208.88 00:10:23.328 PCIE (0000:00:12.0) NSID 3 from core 2: 2993.04 11.69 5345.59 1031.21 18521.82 00:10:23.328 ======================================================== 00:10:23.328 Total : 17952.90 70.13 5346.32 915.70 20477.09 00:10:23.328 00:10:23.584 19:28:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 64191 00:10:25.491 Initializing NVMe Controllers 00:10:25.491 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:25.491 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:25.491 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:25.491 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:25.491 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:25.491 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:25.491 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:25.491 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:25.491 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:25.491 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:25.491 Initialization complete. Launching workers. 
00:10:25.491 ======================================================== 00:10:25.491 Latency(us) 00:10:25.491 Device Information : IOPS MiB/s Average min max 00:10:25.491 PCIE (0000:00:13.0) NSID 1 from core 0: 8885.70 34.71 1800.24 680.62 10209.41 00:10:25.491 PCIE (0000:00:10.0) NSID 1 from core 0: 8879.30 34.68 1800.55 681.76 10261.91 00:10:25.491 PCIE (0000:00:11.0) NSID 1 from core 0: 8879.30 34.68 1801.48 694.69 10267.48 00:10:25.491 PCIE (0000:00:12.0) NSID 1 from core 0: 8879.30 34.68 1801.44 644.99 10423.49 00:10:25.491 PCIE (0000:00:12.0) NSID 2 from core 0: 8882.50 34.70 1800.77 624.08 10333.17 00:10:25.491 PCIE (0000:00:12.0) NSID 3 from core 0: 8879.30 34.68 1801.39 593.56 10406.73 00:10:25.491 ======================================================== 00:10:25.491 Total : 53285.39 208.15 1800.98 593.56 10423.49 00:10:25.491 00:10:25.491 19:28:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 64192 00:10:25.491 19:28:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64261 00:10:25.491 19:28:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64262 00:10:25.491 19:28:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:25.491 19:28:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:25.491 19:28:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:28.856 Initializing NVMe Controllers 00:10:28.856 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:28.856 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:28.856 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:28.856 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:28.856 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:28.856 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:28.856 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:28.856 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:28.856 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:28.857 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:28.857 Initialization complete. Launching workers. 
00:10:28.857 ======================================================== 00:10:28.857 Latency(us) 00:10:28.857 Device Information : IOPS MiB/s Average min max 00:10:28.857 PCIE (0000:00:13.0) NSID 1 from core 0: 4287.77 16.75 3731.02 769.15 12777.59 00:10:28.857 PCIE (0000:00:10.0) NSID 1 from core 0: 4287.77 16.75 3729.90 753.44 12302.48 00:10:28.857 PCIE (0000:00:11.0) NSID 1 from core 0: 4287.77 16.75 3730.98 773.48 11858.08 00:10:28.857 PCIE (0000:00:12.0) NSID 1 from core 0: 4287.77 16.75 3731.01 771.97 12735.73 00:10:28.857 PCIE (0000:00:12.0) NSID 2 from core 0: 4287.77 16.75 3730.96 776.74 13466.88 00:10:28.857 PCIE (0000:00:12.0) NSID 3 from core 0: 4293.10 16.77 3726.29 757.31 13296.01 00:10:28.857 ======================================================== 00:10:28.857 Total : 25731.94 100.52 3730.02 753.44 13466.88 00:10:28.857 00:10:28.857 Initializing NVMe Controllers 00:10:28.857 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:28.857 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:28.857 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:28.857 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:28.857 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:28.857 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:28.857 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:28.857 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:28.857 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:28.857 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:28.857 Initialization complete. Launching workers. 00:10:28.857 ======================================================== 00:10:28.857 Latency(us) 00:10:28.857 Device Information : IOPS MiB/s Average min max 00:10:28.857 PCIE (0000:00:13.0) NSID 1 from core 1: 4355.59 17.01 3672.85 721.97 12969.90 00:10:28.857 PCIE (0000:00:10.0) NSID 1 from core 1: 4355.59 17.01 3671.58 707.41 12380.95 00:10:28.857 PCIE (0000:00:11.0) NSID 1 from core 1: 4355.59 17.01 3672.79 721.94 13073.43 00:10:28.857 PCIE (0000:00:12.0) NSID 1 from core 1: 4355.59 17.01 3672.73 724.65 11681.73 00:10:28.857 PCIE (0000:00:12.0) NSID 2 from core 1: 4355.59 17.01 3672.65 738.47 12109.89 00:10:28.857 PCIE (0000:00:12.0) NSID 3 from core 1: 4360.92 17.03 3668.20 729.95 12934.19 00:10:28.857 ======================================================== 00:10:28.857 Total : 26138.85 102.10 3671.80 707.41 13073.43 00:10:28.857 00:10:30.761 Initializing NVMe Controllers 00:10:30.761 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:30.761 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:30.761 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:30.761 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:30.761 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:30.761 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:30.761 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:30.761 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:30.761 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:30.761 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:30.761 Initialization complete. Launching workers. 
00:10:30.761 ======================================================== 00:10:30.761 Latency(us) 00:10:30.761 Device Information : IOPS MiB/s Average min max 00:10:30.761 PCIE (0000:00:13.0) NSID 1 from core 2: 2235.34 8.73 7156.98 759.41 29973.67 00:10:30.761 PCIE (0000:00:10.0) NSID 1 from core 2: 2235.34 8.73 7152.37 739.81 29462.91 00:10:30.761 PCIE (0000:00:11.0) NSID 1 from core 2: 2235.34 8.73 7152.13 728.25 32299.49 00:10:30.761 PCIE (0000:00:12.0) NSID 1 from core 2: 2235.34 8.73 7152.00 762.23 33596.91 00:10:30.761 PCIE (0000:00:12.0) NSID 2 from core 2: 2235.34 8.73 7151.87 768.85 37543.97 00:10:30.761 PCIE (0000:00:12.0) NSID 3 from core 2: 2235.34 8.73 7152.11 766.44 32557.58 00:10:30.761 ======================================================== 00:10:30.761 Total : 13412.04 52.39 7152.91 728.25 37543.97 00:10:30.761 00:10:30.761 19:28:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64261 00:10:30.761 19:28:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64262 00:10:30.761 00:10:30.761 real 0m10.632s 00:10:30.761 user 0m18.380s 00:10:30.761 sys 0m0.666s 00:10:30.761 19:28:57 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.761 19:28:57 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:30.761 ************************************ 00:10:30.761 END TEST nvme_multi_secondary 00:10:30.761 ************************************ 00:10:30.761 19:28:57 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:30.761 19:28:57 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:30.761 19:28:57 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/63213 ]] 00:10:30.761 19:28:57 nvme -- common/autotest_common.sh@1094 -- # kill 63213 00:10:30.761 19:28:57 nvme -- common/autotest_common.sh@1095 -- # wait 63213 00:10:30.761 [2024-12-05 19:28:57.807612] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64134) is not found. Dropping the request. 00:10:30.761 [2024-12-05 19:28:57.807716] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64134) is not found. Dropping the request. 00:10:30.761 [2024-12-05 19:28:57.807749] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64134) is not found. Dropping the request. 00:10:30.761 [2024-12-05 19:28:57.807769] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64134) is not found. Dropping the request. 00:10:30.761 [2024-12-05 19:28:57.811334] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64134) is not found. Dropping the request. 00:10:30.761 [2024-12-05 19:28:57.811418] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64134) is not found. Dropping the request. 00:10:30.762 [2024-12-05 19:28:57.811440] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64134) is not found. Dropping the request. 00:10:30.762 [2024-12-05 19:28:57.811460] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64134) is not found. Dropping the request. 00:10:30.762 [2024-12-05 19:28:57.814316] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64134) is not found. Dropping the request. 
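The "owning process (pid 64134) is not found" messages around this point are fallout from the kill_stub teardown traced just above: with the earlier test processes gone, their still-pending admin requests are dropped as the long-lived stub that held the controllers shuts down. A sketch of that teardown, with the pid and marker path taken from this run's trace (the real helper lives in autotest_common.sh; its final rm -f appears below):

    stub_pid=63213                      # recorded when the stub was launched
    if [[ -e /proc/$stub_pid ]]; then   # only if the stub is still running
        kill "$stub_pid"
        wait "$stub_pid" 2>/dev/null || true
    fi
    rm -f /var/run/spdk_stub0           # drop the stub's marker file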
00:10:30.762 [2024-12-05 19:28:57.814375] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64134) is not found. Dropping the request. 00:10:30.762 [2024-12-05 19:28:57.814394] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64134) is not found. Dropping the request. 00:10:30.762 [2024-12-05 19:28:57.814413] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64134) is not found. Dropping the request. 00:10:30.762 [2024-12-05 19:28:57.816911] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64134) is not found. Dropping the request. 00:10:30.762 [2024-12-05 19:28:57.816947] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64134) is not found. Dropping the request. 00:10:30.762 [2024-12-05 19:28:57.816958] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64134) is not found. Dropping the request. 00:10:30.762 [2024-12-05 19:28:57.816969] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64134) is not found. Dropping the request. 00:10:30.762 19:28:57 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:10:30.762 19:28:57 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:10:30.762 19:28:57 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:30.762 19:28:57 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:30.762 19:28:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.762 19:28:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:30.762 ************************************ 00:10:30.762 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:30.762 ************************************ 00:10:30.762 19:28:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:31.028 * Looking for test storage... 
00:10:31.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:31.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.028 --rc genhtml_branch_coverage=1 00:10:31.028 --rc genhtml_function_coverage=1 00:10:31.028 --rc genhtml_legend=1 00:10:31.028 --rc geninfo_all_blocks=1 00:10:31.028 --rc geninfo_unexecuted_blocks=1 00:10:31.028 00:10:31.028 ' 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:31.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.028 --rc genhtml_branch_coverage=1 00:10:31.028 --rc genhtml_function_coverage=1 00:10:31.028 --rc genhtml_legend=1 00:10:31.028 --rc geninfo_all_blocks=1 00:10:31.028 --rc geninfo_unexecuted_blocks=1 00:10:31.028 00:10:31.028 ' 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:31.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.028 --rc genhtml_branch_coverage=1 00:10:31.028 --rc genhtml_function_coverage=1 00:10:31.028 --rc genhtml_legend=1 00:10:31.028 --rc geninfo_all_blocks=1 00:10:31.028 --rc geninfo_unexecuted_blocks=1 00:10:31.028 00:10:31.028 ' 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:31.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.028 --rc genhtml_branch_coverage=1 00:10:31.028 --rc genhtml_function_coverage=1 00:10:31.028 --rc genhtml_legend=1 00:10:31.028 --rc geninfo_all_blocks=1 00:10:31.028 --rc geninfo_unexecuted_blocks=1 00:10:31.028 00:10:31.028 ' 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:31.028 
19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:31.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64430 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64430 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64430 ']' 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
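Once the spdk_tgt target below finishes starting, the sequence this test drives over RPC condenses to the following sketch. Every rpc.py method shown is invoked verbatim later in this trace; only the shell variable names are illustrative, and the base64-encoded command payload is elided.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Attach the first controller as bdev 'nvme0'.
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # Arm a one-shot injection: hold the next Get Features (opc 10) admin
    # command for up to 15 s, then complete it as an error (sct 0 / sc 1)
    # without ever submitting it to the device.
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # Fire a Get Features command in the background; it sticks behind the
    # injection ("$encoded_cmd" stands in for the base64 SQE in the trace).
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$encoded_cmd" &
    get_feat_pid=$!
    sleep 2
    # The reset must complete promptly even with an admin command stuck.
    $rpc bdev_nvme_reset_controller nvme0
    wait "$get_feat_pid"

The checks at the end of the test are that the manually completed command carries the injected status (sc 1 / sct 0) and that the reset finished inside the 5-second test_timeout set above.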
00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.028 19:28:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:31.028 [2024-12-05 19:28:58.242460] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:10:31.028 [2024-12-05 19:28:58.242584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64430 ] 00:10:31.288 [2024-12-05 19:28:58.405445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:31.288 [2024-12-05 19:28:58.510083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.288 [2024-12-05 19:28:58.510421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.288 [2024-12-05 19:28:58.510841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:31.288 [2024-12-05 19:28:58.510976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.859 19:28:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.859 19:28:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:10:31.859 19:28:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:31.859 19:28:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.860 19:28:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:32.119 nvme0n1 00:10:32.119 19:28:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.120 19:28:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:32.120 19:28:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_ie2eu.txt 00:10:32.120 19:28:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:32.120 19:28:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.120 19:28:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:32.120 true 00:10:32.120 19:28:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.120 19:28:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:32.120 19:28:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733426939 00:10:32.120 19:28:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64452 00:10:32.120 19:28:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:32.120 19:28:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:32.120 19:28:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:34.064 [2024-12-05 19:29:01.205068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:34.064 [2024-12-05 19:29:01.205710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:34.064 [2024-12-05 19:29:01.205761] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:34.064 [2024-12-05 19:29:01.205777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.064 [2024-12-05 19:29:01.207827] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:34.064 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64452 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64452 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64452 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_ie2eu.txt 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_ie2eu.txt 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64430 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64430 ']' 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64430 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.064 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64430 00:10:34.327 killing process with pid 64430 00:10:34.327 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.327 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.327 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64430' 00:10:34.327 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64430 00:10:34.327 19:29:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64430 00:10:35.712 19:29:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:35.713 19:29:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:35.713 00:10:35.713 real 0m4.916s 00:10:35.713 user 0m17.445s 00:10:35.713 sys 0m0.504s 00:10:35.713 19:29:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:35.713 19:29:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:35.713 ************************************ 00:10:35.713 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:35.713 ************************************ 00:10:35.713 19:29:02 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:35.713 19:29:02 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:35.713 19:29:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:35.713 19:29:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.713 19:29:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:35.713 ************************************ 00:10:35.713 START TEST nvme_fio 00:10:35.713 ************************************ 00:10:35.713 19:29:02 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:10:35.713 19:29:02 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:35.713 19:29:02 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:35.713 19:29:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:35.713 19:29:02 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:35.713 19:29:02 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:10:35.713 19:29:02 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:35.713 19:29:02 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:35.713 19:29:02 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:35.970 19:29:02 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:35.970 19:29:02 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:35.970 19:29:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:35.970 19:29:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:35.970 19:29:02 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:35.970 19:29:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:35.970 19:29:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:35.970 19:29:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:35.970 19:29:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:36.228 19:29:03 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:36.228 19:29:03 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:36.228 19:29:03 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:36.228 19:29:03 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:36.228 19:29:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:36.228 19:29:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:36.228 19:29:03 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:36.228 19:29:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:36.228 19:29:03 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:36.228 19:29:03 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:36.228 19:29:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:36.228 19:29:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:36.228 19:29:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:36.228 19:29:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:36.228 19:29:03 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:36.228 19:29:03 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:36.228 19:29:03 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:36.228 19:29:03 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:36.485 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:36.485 fio-3.35 00:10:36.485 Starting 1 thread 00:10:41.751 00:10:41.751 test: (groupid=0, jobs=1): err= 0: pid=64587: Thu Dec 5 19:29:08 2024 00:10:41.751 read: IOPS=19.9k, BW=77.8MiB/s (81.5MB/s)(156MiB/2001msec) 00:10:41.751 slat (nsec): min=3377, max=62209, avg=5915.30, stdev=3669.82 00:10:41.751 clat (usec): min=523, max=11181, avg=3198.89, stdev=1281.34 00:10:41.751 lat (usec): min=533, max=11199, avg=3204.81, stdev=1283.70 00:10:41.751 clat percentiles (usec): 00:10:41.751 | 1.00th=[ 1778], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2507], 00:10:41.751 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2737], 00:10:41.751 | 70.00th=[ 2933], 80.00th=[ 3785], 90.00th=[ 5407], 95.00th=[ 6063], 00:10:41.751 | 99.00th=[ 7767], 99.50th=[ 8455], 99.90th=[ 9896], 99.95th=[10421], 00:10:41.751 | 99.99th=[11076] 00:10:41.751 bw ( KiB/s): min=66128, max=90712, per=95.22%, avg=75818.67, stdev=13091.76, samples=3 00:10:41.751 iops : min=16532, max=22678, avg=18954.67, stdev=3272.94, samples=3 00:10:41.751 write: IOPS=19.9k, BW=77.6MiB/s (81.4MB/s)(155MiB/2001msec); 0 zone resets 00:10:41.751 slat (usec): min=3, max=101, avg= 6.20, stdev= 3.81 00:10:41.751 clat (usec): min=568, max=11530, avg=3212.34, stdev=1283.45 00:10:41.751 lat (usec): min=578, max=11535, avg=3218.54, stdev=1285.79 00:10:41.751 clat percentiles (usec): 00:10:41.751 | 1.00th=[ 1778], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2507], 00:10:41.751 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2671], 60.00th=[ 2737], 00:10:41.751 | 70.00th=[ 2966], 80.00th=[ 3818], 90.00th=[ 5407], 95.00th=[ 6063], 00:10:41.751 | 99.00th=[ 7832], 99.50th=[ 8586], 99.90th=[ 9896], 99.95th=[10421], 00:10:41.751 | 99.99th=[11076] 00:10:41.751 bw ( KiB/s): min=66760, max=90208, per=95.44%, avg=75848.00, stdev=12581.64, samples=3 00:10:41.751 iops : min=16690, max=22552, avg=18962.00, stdev=3145.41, samples=3 00:10:41.751 lat (usec) : 750=0.01%, 1000=0.03% 00:10:41.751 lat (msec) : 2=2.11%, 4=79.91%, 10=17.86%, 20=0.08% 00:10:41.751 cpu : usr=98.90%, sys=0.15%, ctx=2, majf=0, minf=608 
00:10:41.751 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:41.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.751 issued rwts: total=39834,39754,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.751 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:41.751 00:10:41.751 Run status group 0 (all jobs): 00:10:41.751 READ: bw=77.8MiB/s (81.5MB/s), 77.8MiB/s-77.8MiB/s (81.5MB/s-81.5MB/s), io=156MiB (163MB), run=2001-2001msec 00:10:41.751 WRITE: bw=77.6MiB/s (81.4MB/s), 77.6MiB/s-77.6MiB/s (81.4MB/s-81.4MB/s), io=155MiB (163MB), run=2001-2001msec 00:10:42.007 ----------------------------------------------------- 00:10:42.007 Suppressions used: 00:10:42.007 count bytes template 00:10:42.007 1 32 /usr/src/fio/parse.c 00:10:42.007 1 8 libtcmalloc_minimal.so 00:10:42.007 ----------------------------------------------------- 00:10:42.007 00:10:42.007 19:29:09 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:42.007 19:29:09 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:42.007 19:29:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:42.007 19:29:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:42.264 19:29:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:42.264 19:29:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:42.521 19:29:09 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:42.521 19:29:09 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:42.521 19:29:09 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:42.521 19:29:09 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:42.521 19:29:09 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:42.521 19:29:09 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:42.521 19:29:09 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:42.521 19:29:09 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:42.521 19:29:09 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:42.521 19:29:09 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:42.521 19:29:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:42.521 19:29:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:42.521 19:29:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:42.521 19:29:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:42.521 19:29:09 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:42.521 19:29:09 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:42.521 19:29:09 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:42.521 19:29:09 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:42.521 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:42.521 fio-3.35 00:10:42.521 Starting 1 thread 00:10:49.096 00:10:49.097 test: (groupid=0, jobs=1): err= 0: pid=64648: Thu Dec 5 19:29:15 2024 00:10:49.097 read: IOPS=22.4k, BW=87.3MiB/s (91.6MB/s)(175MiB/2001msec) 00:10:49.097 slat (nsec): min=4202, max=64249, avg=5214.70, stdev=2234.47 00:10:49.097 clat (usec): min=476, max=6860, avg=2858.78, stdev=742.48 00:10:49.097 lat (usec): min=486, max=6865, avg=2863.99, stdev=743.70 00:10:49.097 clat percentiles (usec): 00:10:49.097 | 1.00th=[ 1795], 5.00th=[ 2245], 10.00th=[ 2442], 20.00th=[ 2507], 00:10:49.097 | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2638], 00:10:49.097 | 70.00th=[ 2737], 80.00th=[ 2933], 90.00th=[ 3982], 95.00th=[ 4490], 00:10:49.097 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 6521], 99.95th=[ 6652], 00:10:49.097 | 99.99th=[ 6783] 00:10:49.097 bw ( KiB/s): min=87640, max=94280, per=100.00%, avg=90040.00, stdev=3682.61, samples=3 00:10:49.097 iops : min=21910, max=23570, avg=22510.00, stdev=920.65, samples=3 00:10:49.097 write: IOPS=22.2k, BW=86.7MiB/s (90.9MB/s)(174MiB/2001msec); 0 zone resets 00:10:49.097 slat (nsec): min=4292, max=74088, avg=5491.97, stdev=2300.09 00:10:49.097 clat (usec): min=401, max=6878, avg=2864.06, stdev=740.42 00:10:49.097 lat (usec): min=408, max=6883, avg=2869.55, stdev=741.69 00:10:49.097 clat percentiles (usec): 00:10:49.097 | 1.00th=[ 1795], 5.00th=[ 2278], 10.00th=[ 2442], 20.00th=[ 2507], 00:10:49.097 | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2671], 00:10:49.097 | 70.00th=[ 2737], 80.00th=[ 2966], 90.00th=[ 3982], 95.00th=[ 4490], 00:10:49.097 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 6652], 99.95th=[ 6652], 00:10:49.097 | 99.99th=[ 6783] 00:10:49.097 bw ( KiB/s): min=87217, max=93616, per=100.00%, avg=90253.67, stdev=3211.91, samples=3 00:10:49.097 iops : min=21804, max=23404, avg=22563.33, stdev=803.09, samples=3 00:10:49.097 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.04% 00:10:49.097 lat (msec) : 2=2.11%, 4=88.09%, 10=9.76% 00:10:49.097 cpu : usr=99.25%, sys=0.05%, ctx=4, majf=0, minf=608 00:10:49.097 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:49.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:49.097 issued rwts: total=44734,44424,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.097 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:49.097 00:10:49.097 Run status group 0 (all jobs): 00:10:49.097 READ: bw=87.3MiB/s (91.6MB/s), 87.3MiB/s-87.3MiB/s (91.6MB/s-91.6MB/s), io=175MiB (183MB), run=2001-2001msec 00:10:49.097 WRITE: bw=86.7MiB/s (90.9MB/s), 86.7MiB/s-86.7MiB/s (90.9MB/s-90.9MB/s), io=174MiB (182MB), run=2001-2001msec 00:10:49.097 ----------------------------------------------------- 00:10:49.097 Suppressions used: 00:10:49.097 count bytes template 00:10:49.097 1 32 /usr/src/fio/parse.c 00:10:49.097 1 8 libtcmalloc_minimal.so 00:10:49.097 ----------------------------------------------------- 00:10:49.097 00:10:49.097 19:29:16 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:49.097 19:29:16 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:49.097 19:29:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:49.097 19:29:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:49.097 19:29:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:49.097 19:29:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:49.355 19:29:16 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:49.355 19:29:16 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:49.355 19:29:16 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:49.355 19:29:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:49.355 19:29:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:49.355 19:29:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:49.355 19:29:16 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:49.355 19:29:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:49.355 19:29:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:49.355 19:29:16 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:49.355 19:29:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:49.355 19:29:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:49.355 19:29:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:49.355 19:29:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:49.355 19:29:16 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:49.355 19:29:16 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:49.355 19:29:16 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:49.355 19:29:16 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:49.613 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:49.613 fio-3.35 00:10:49.613 Starting 1 thread 00:10:59.653 00:10:59.653 test: (groupid=0, jobs=1): err= 0: pid=64704: Thu Dec 5 19:29:25 2024 00:10:59.653 read: IOPS=21.3k, BW=83.3MiB/s (87.4MB/s)(167MiB/2001msec) 00:10:59.653 slat (nsec): min=3377, max=81199, avg=5214.39, stdev=2331.82 00:10:59.653 clat (usec): min=205, max=10225, avg=2987.51, stdev=1028.35 00:10:59.653 lat (usec): min=210, max=10229, avg=2992.73, stdev=1029.46 00:10:59.653 clat percentiles (usec): 00:10:59.653 | 1.00th=[ 1549], 5.00th=[ 2114], 10.00th=[ 2311], 20.00th=[ 2442], 00:10:59.653 | 30.00th=[ 2507], 40.00th=[ 
2573], 50.00th=[ 2638], 60.00th=[ 2704], 00:10:59.653 | 70.00th=[ 2900], 80.00th=[ 3326], 90.00th=[ 4359], 95.00th=[ 5342], 00:10:59.653 | 99.00th=[ 6849], 99.50th=[ 7439], 99.90th=[ 8455], 99.95th=[ 9110], 00:10:59.653 | 99.99th=[10159] 00:10:59.653 bw ( KiB/s): min=67720, max=95184, per=98.45%, avg=84024.00, stdev=14436.53, samples=3 00:10:59.653 iops : min=16930, max=23796, avg=21006.00, stdev=3609.13, samples=3 00:10:59.653 write: IOPS=21.2k, BW=82.7MiB/s (86.7MB/s)(166MiB/2001msec); 0 zone resets 00:10:59.653 slat (nsec): min=3467, max=69632, avg=5453.81, stdev=2449.31 00:10:59.653 clat (usec): min=244, max=13447, avg=3011.66, stdev=1066.89 00:10:59.653 lat (usec): min=249, max=13451, avg=3017.12, stdev=1067.97 00:10:59.653 clat percentiles (usec): 00:10:59.653 | 1.00th=[ 1582], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2442], 00:10:59.653 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2638], 60.00th=[ 2737], 00:10:59.653 | 70.00th=[ 2900], 80.00th=[ 3359], 90.00th=[ 4424], 95.00th=[ 5407], 00:10:59.653 | 99.00th=[ 6980], 99.50th=[ 7504], 99.90th=[ 9372], 99.95th=[11207], 00:10:59.653 | 99.99th=[13173] 00:10:59.653 bw ( KiB/s): min=67912, max=95232, per=99.30%, avg=84117.33, stdev=14353.81, samples=3 00:10:59.653 iops : min=16978, max=23808, avg=21029.33, stdev=3588.45, samples=3 00:10:59.653 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.06% 00:10:59.653 lat (msec) : 2=3.34%, 4=83.85%, 10=12.67%, 20=0.05% 00:10:59.653 cpu : usr=99.05%, sys=0.00%, ctx=4, majf=0, minf=608 00:10:59.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:59.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:59.653 issued rwts: total=42694,42377,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.653 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:59.653 00:10:59.653 Run status group 0 (all jobs): 00:10:59.653 READ: bw=83.3MiB/s (87.4MB/s), 83.3MiB/s-83.3MiB/s (87.4MB/s-87.4MB/s), io=167MiB (175MB), run=2001-2001msec 00:10:59.653 WRITE: bw=82.7MiB/s (86.7MB/s), 82.7MiB/s-82.7MiB/s (86.7MB/s-86.7MB/s), io=166MiB (174MB), run=2001-2001msec 00:10:59.653 ----------------------------------------------------- 00:10:59.653 Suppressions used: 00:10:59.653 count bytes template 00:10:59.653 1 32 /usr/src/fio/parse.c 00:10:59.653 1 8 libtcmalloc_minimal.so 00:10:59.653 ----------------------------------------------------- 00:10:59.653 00:10:59.653 19:29:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:59.653 19:29:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:59.653 19:29:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:59.653 19:29:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:59.653 19:29:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:59.653 19:29:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:59.653 19:29:26 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:59.653 19:29:26 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:59.653 19:29:26 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:59.653 19:29:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:59.653 19:29:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:59.653 19:29:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:59.653 19:29:26 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:59.653 19:29:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:59.653 19:29:26 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:59.653 19:29:26 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:59.653 19:29:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:59.653 19:29:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:59.653 19:29:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:59.653 19:29:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:59.653 19:29:26 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:59.653 19:29:26 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:59.653 19:29:26 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:59.653 19:29:26 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:59.653 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:59.653 fio-3.35 00:10:59.653 Starting 1 thread 00:11:17.726 00:11:17.726 test: (groupid=0, jobs=1): err= 0: pid=64766: Thu Dec 5 19:29:43 2024 00:11:17.726 read: IOPS=21.5k, BW=84.1MiB/s (88.2MB/s)(170MiB/2018msec) 00:11:17.726 slat (nsec): min=4207, max=58195, avg=5096.08, stdev=2108.80 00:11:17.726 clat (usec): min=954, max=25221, avg=2865.01, stdev=1017.01 00:11:17.726 lat (usec): min=971, max=25225, avg=2870.10, stdev=1017.88 00:11:17.726 clat percentiles (usec): 00:11:17.726 | 1.00th=[ 1844], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2442], 00:11:17.726 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2638], 00:11:17.726 | 70.00th=[ 2769], 80.00th=[ 2999], 90.00th=[ 3589], 95.00th=[ 4752], 00:11:17.726 | 99.00th=[ 6652], 99.50th=[ 7308], 99.90th=[ 9372], 99.95th=[21365], 00:11:17.726 | 99.99th=[23725] 00:11:17.726 bw ( KiB/s): min=69600, max=97397, per=100.00%, avg=86761.25, stdev=11957.14, samples=4 00:11:17.726 iops : min=17400, max=24349, avg=21690.25, stdev=2989.21, samples=4 00:11:17.726 write: IOPS=21.4k, BW=83.5MiB/s (87.5MB/s)(168MiB/2018msec); 0 zone resets 00:11:17.726 slat (nsec): min=4340, max=48806, avg=5383.27, stdev=2022.43 00:11:17.726 clat (usec): min=969, max=43507, avg=3079.09, stdev=2590.73 00:11:17.726 lat (usec): min=974, max=43513, avg=3084.48, stdev=2591.09 00:11:17.726 clat percentiles (usec): 00:11:17.726 | 1.00th=[ 1876], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2442], 00:11:17.726 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2638], 00:11:17.726 | 70.00th=[ 2769], 80.00th=[ 2999], 90.00th=[ 3654], 95.00th=[ 4883], 
00:11:17.726 | 99.00th=[14091], 99.50th=[27132], 99.90th=[31327], 99.95th=[36963], 00:11:17.726 | 99.99th=[41681] 00:11:17.726 bw ( KiB/s): min=66416, max=96606, per=100.00%, avg=86023.50, stdev=13413.75, samples=4 00:11:17.726 iops : min=16604, max=24151, avg=21505.75, stdev=3353.31, samples=4 00:11:17.726 lat (usec) : 1000=0.01% 00:11:17.726 lat (msec) : 2=1.69%, 4=91.21%, 10=6.54%, 20=0.06%, 50=0.50% 00:11:17.726 cpu : usr=99.26%, sys=0.05%, ctx=3, majf=0, minf=606 00:11:17.726 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:17.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.726 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.726 issued rwts: total=43444,43124,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.726 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.726 00:11:17.726 Run status group 0 (all jobs): 00:11:17.726 READ: bw=84.1MiB/s (88.2MB/s), 84.1MiB/s-84.1MiB/s (88.2MB/s-88.2MB/s), io=170MiB (178MB), run=2018-2018msec 00:11:17.726 WRITE: bw=83.5MiB/s (87.5MB/s), 83.5MiB/s-83.5MiB/s (87.5MB/s-87.5MB/s), io=168MiB (177MB), run=2018-2018msec 00:11:17.726 ----------------------------------------------------- 00:11:17.726 Suppressions used: 00:11:17.726 count bytes template 00:11:17.726 1 32 /usr/src/fio/parse.c 00:11:17.726 1 8 libtcmalloc_minimal.so 00:11:17.726 ----------------------------------------------------- 00:11:17.726 00:11:17.726 19:29:43 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:17.726 19:29:43 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:11:17.726 00:11:17.726 real 0m40.707s 00:11:17.726 user 0m18.410s 00:11:17.726 sys 0m42.812s 00:11:17.726 19:29:43 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.726 19:29:43 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:11:17.726 ************************************ 00:11:17.726 END TEST nvme_fio 00:11:17.726 ************************************ 00:11:17.726 00:11:17.726 real 1m50.456s 00:11:17.726 user 3m40.164s 00:11:17.726 sys 0m53.515s 00:11:17.726 19:29:43 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.726 19:29:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:17.726 ************************************ 00:11:17.726 END TEST nvme 00:11:17.726 ************************************ 00:11:17.726 19:29:43 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:11:17.726 19:29:43 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:17.726 19:29:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:17.726 19:29:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.726 19:29:43 -- common/autotest_common.sh@10 -- # set +x 00:11:17.727 ************************************ 00:11:17.727 START TEST nvme_scc 00:11:17.727 ************************************ 00:11:17.727 19:29:43 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:17.727 * Looking for test storage... 
00:11:17.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:17.727 19:29:43 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:17.727 19:29:43 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:17.727 19:29:43 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:17.727 19:29:43 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@345 -- # : 1 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@368 -- # return 0 00:11:17.727 19:29:43 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:17.727 19:29:43 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:17.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.727 --rc genhtml_branch_coverage=1 00:11:17.727 --rc genhtml_function_coverage=1 00:11:17.727 --rc genhtml_legend=1 00:11:17.727 --rc geninfo_all_blocks=1 00:11:17.727 --rc geninfo_unexecuted_blocks=1 00:11:17.727 00:11:17.727 ' 00:11:17.727 19:29:43 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:17.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.727 --rc genhtml_branch_coverage=1 00:11:17.727 --rc genhtml_function_coverage=1 00:11:17.727 --rc genhtml_legend=1 00:11:17.727 --rc geninfo_all_blocks=1 00:11:17.727 --rc geninfo_unexecuted_blocks=1 00:11:17.727 00:11:17.727 ' 00:11:17.727 19:29:43 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:17.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.727 --rc genhtml_branch_coverage=1 00:11:17.727 --rc genhtml_function_coverage=1 00:11:17.727 --rc genhtml_legend=1 00:11:17.727 --rc geninfo_all_blocks=1 00:11:17.727 --rc geninfo_unexecuted_blocks=1 00:11:17.727 00:11:17.727 ' 00:11:17.727 19:29:43 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:17.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.727 --rc genhtml_branch_coverage=1 00:11:17.727 --rc genhtml_function_coverage=1 00:11:17.727 --rc genhtml_legend=1 00:11:17.727 --rc geninfo_all_blocks=1 00:11:17.727 --rc geninfo_unexecuted_blocks=1 00:11:17.727 00:11:17.727 ' 00:11:17.727 19:29:43 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:17.727 19:29:43 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:17.727 19:29:43 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:17.727 19:29:43 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:17.727 19:29:43 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.727 19:29:43 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.727 19:29:43 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.727 19:29:43 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.727 19:29:43 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.727 19:29:43 nvme_scc -- paths/export.sh@5 -- # export PATH 00:11:17.727 19:29:43 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
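The lcov probe traced above boils down to scripts/common.sh comparing dot-separated version fields: lt 1.15 2 walks ver1 and ver2 one numeric field at a time to decide whether the installed lcov is old enough to need the explicit branch/function coverage flags. A minimal sketch of that kind of dotted-version test in plain bash follows; it assumes purely numeric dot-separated fields (the traced helper also splits on '-' and ':'), and ver_lt is an illustrative name rather than the function the harness actually defines.

ver_lt() {
    # Return 0 (true) when version $1 sorts strictly before version $2.
    # Compare dot-separated fields left to right; missing fields count as 0,
    # so 1.15 < 2 and 1.2 < 1.15 (numeric, not lexicographic, comparison).
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( 10#$a < 10#$b )) && return 0   # 10# guards against leading zeros
        (( 10#$a > 10#$b )) && return 1
    done
    return 1   # equal versions are not "less than"
}

# Mirrors the "lt 1.15 2" check in the trace: pre-2.0 lcov takes the
# --rc lcov_* spellings of the coverage options.
if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi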
00:11:17.727 19:29:43 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:11:17.727 19:29:43 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:17.727 19:29:43 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:11:17.727 19:29:43 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:17.727 19:29:43 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:11:17.727 19:29:43 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:17.727 19:29:43 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:17.727 19:29:43 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:17.727 19:29:43 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:11:17.727 19:29:43 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:17.727 19:29:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:11:17.727 19:29:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:17.727 19:29:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:17.727 19:29:43 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:17.727 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:17.727 Waiting for block devices as requested 00:11:17.727 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:17.727 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:17.727 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:17.727 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:23.020 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:23.021 19:29:49 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:23.021 19:29:49 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:23.021 19:29:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:23.021 19:29:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:23.021 19:29:49 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 '
00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl '
00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 '
00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6
00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400
00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7
00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400
00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0
00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0
00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100
00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:11:23.021 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:11:23.022 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0
-- # nvme0[fna]=0 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 19:29:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:23.023 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:23.024 19:29:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:11:23.024 
19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.024 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
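The cycle repeating throughout this trace is the nvme_get helper in nvme/functions.sh: it runs nvme-cli's id-ctrl/id-ns against a device and stores every "field : value" pair of the output in a global associative array named after that device. A minimal sketch of that loop, pieced together from the functions.sh@16-23 call sites visible above (the whitespace trimming is an assumption and the real helper may differ in detail):

    # Populate a global assoc. array ($1) from nvme-cli identify output.
    # Example: nvme_get ng0n1 id-ns /dev/ng0n1
    nvme_get() {
        local ref=$1 reg val                        # functions.sh@17
        shift                                       # @18: rest is the nvme-cli cmd
        local -gA "$ref=()"                         # @20: e.g. declare -gA ng0n1
        while IFS=: read -r reg val; do             # @21: split "field : value"
            [[ -n $val ]] || continue               # @22: skip banner/blank lines
            reg=${reg//[[:space:]]/}                # "ps    0 " -> "ps0"  (trim assumed)
            val=${val# }                            # drop the space after ':' (assumed)
            eval "${ref}[$reg]=\"$val\""            # @23: e.g. ng0n1[nsze]="0x140000"
        done < <(/usr/local/src/nvme-cli/nvme "$@") # @16
    }

After it returns, lookups such as ${ng0n1[nsze]} yield 0x140000, matching the assignments traced above; note that values keep their trailing padding (e.g. nvme1[sn]='12340 ' later in this log).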
00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:11:23.025 19:29:49 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.025 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:23.026 19:29:49 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.026 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:23.027 19:29:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:23.027 19:29:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:23.027 19:29:49 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:23.028 19:29:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:23.028 19:29:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:23.028 19:29:49 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:23.028 19:29:49 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.028 
00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:11:23.028 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:11:23.029 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:11:23.030 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()'
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
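The trace above is nvme/functions.sh's nvme_get walking `nvme id-ctrl` output one "field : value" line at a time (functions.sh@16-23) and storing each pair in a global associative array named after the controller. A minimal bash sketch of that pattern follows; the helper name parse_id_output is hypothetical, and the real nvme_get differs in detail (it keeps trailing padding in values, as the sn/mn entries below show):

    #!/usr/bin/env bash
    # Sketch of the id-ctrl/id-ns parsing loop traced above.
    # parse_id_output is a hypothetical name; the array name comes from $1.
    parse_id_output() {
        local ref=$1 reg val
        shift
        declare -gA "$ref=()"                    # e.g. nvme1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}             # "mdts      " -> "mdts"
            val=${val#"${val%%[![:space:]]*}"}   # left-trim only; the script
                                                 # keeps trailing padding
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\"\$val\""       # nvme1[mdts]="7", as logged
        done < <("$@")                           # e.g. nvme id-ctrl /dev/nvme1
    }
    # Usage: parse_id_output nvme1 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
    #        echo "${nvme1[mdts]}"   -> 7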
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0
00:11:23.031 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:11:23.032 19:29:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
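Namespace discovery at functions.sh@54 relies on a single extglob pattern that matches both the character node (ng1n1) and the block node (nvme1n1) of a controller, which is why the same id-ns data is captured once above and once again below. A small standalone illustration of that glob, with the sysfs path and values as in this run:

    #!/usr/bin/env bash
    # The @54 glob, in isolation: for nvme1 it expands to
    # /sys/class/nvme/nvme1/ng1n1 and /sys/class/nvme/nvme1/nvme1n1.
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        # ${ctrl##*nvme} -> "1"; ${ctrl##*/} -> "nvme1"
        echo "namespace node: ${ns##*/}"
    done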
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:11:23.033 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
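Both passes write into the same per-controller array through the nameref declared at functions.sh@53, keyed by the namespace id, so the nvme1n1 entry recorded at @58 just above overwrites the earlier ng1n1 one. A compact sketch of that indirection; fill_ns is a hypothetical name:

    #!/usr/bin/env bash
    # Nameref indirection as at functions.sh@53/@58: writes to _ctrl_ns land
    # in whichever array the caller names.
    declare -A nvme1_ns=()
    fill_ns() {
        local -n _ctrl_ns=$1        # $1 names the real array, e.g. nvme1_ns
        local ns
        for ns in ng1n1 nvme1n1; do
            _ctrl_ns[${ns##*n}]=$ns # ${ns##*n} -> "1", the namespace id
        done
    }
    fill_ns nvme1_ns
    echo "${nvme1_ns[1]}"           # -> nvme1n1: last writer wins, as here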
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:23.034 19:29:49 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:23.034 19:29:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:23.034 19:29:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:23.034 19:29:49 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:23.034 19:29:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.035 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:23.036 19:29:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
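
Annotation on the wctemp/cctemp values just captured (not log output): NVMe reports temperature thresholds in Kelvin, so nvme2[wctemp]=343 and nvme2[cctemp]=373 correspond to roughly 70 C (warning) and 100 C (critical). A throwaway conversion sketch using the values from the trace:

    declare -A nvme2=( [wctemp]=343 [cctemp]=373 )        # values from the trace above
    for f in wctemp cctemp; do
      k=${nvme2[$f]}
      printf '%s: %d K ~= %d C\n' "$f" "$k" $(( k - 273 ))  # 343 -> 70, 373 -> 100
    done
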
00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.036 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:23.037 19:29:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:23.037 
19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:23.037 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:23.038 
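
Context for the loop starting here (annotation, not log output): with the controller fields stored, functions.sh@54 walks the controller's sysfs directory using an extglob pattern that matches both the generic char-dev namespaces (ng2n1, ng2n2) and the block-dev ones (nvme2n1, ...). A sketch of that expansion, assuming the same sysfs layout:

    shopt -s extglob nullglob                 # the @(...) pattern needs extglob
    ctrl=/sys/class/nvme/nvme2
    # ${ctrl##*nvme} -> "2", ${ctrl##*/} -> "nvme2", so the glob becomes
    # /sys/class/nvme/nvme2/@(ng2|nvme2n)*  -- i.e. ng2n1, ng2n2, nvme2n1, ...
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      ns_dev=${ns##*/}
      echo "would run: nvme id-ns /dev/$ns_dev"
    done
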
19:29:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.038 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.039 19:29:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.039 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:11:23.040 19:29:49 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 
19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.040 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.041 19:29:49 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.041 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.042 19:29:49 
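The records condensed above are all one helper at work: nvme/functions.sh's nvme_get splits each "name : value" line of `nvme id-ns` output into a global associative array named after the device. A minimal sketch of that pattern, reconstructed from the trace (it is not the verbatim functions.sh source; the whitespace trimming and the way the nvme binary is invoked are assumptions):

    # Sketch of the nvme_get pattern seen at functions.sh@16-23.
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"              # e.g. declare -gA ng2n2=()     (@20)
        while IFS=: read -r reg val; do  # split "reg : val" on the colon (@21)
            reg=${reg//[[:space:]]/}     # "lbaf  0 " -> "lbaf0" (assumed trim)
            val=${val# }                 # drop the space after the colon
            [[ -n $val ]] || continue    # skip lines with no value       (@22)
            eval "${ref}[\$reg]=\$val"   # ng2n2[nsze]=0x100000 ...       (@23)
        done < <("$@")                   # e.g. < <(nvme id-ns /dev/ng2n2)
    }

Called as `nvme_get_sketch ng2n2 nvme id-ns /dev/ng2n2`, it leaves ${ng2n2[nsze]}, ${ng2n2[flbas]}, etc. queryable by the rest of the test; in the real trace the nvme-cli path (/usr/local/src/nvme-cli/nvme) is supplied inside the helper.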
00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:11:23.042 19:29:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:11:23.043 19:29:49 nvme_scc -- [per-field IFS=:/read/eval trace condensed] ng2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:23.043 19:29:49 nvme_scc -- [per-field IFS=:/read/eval trace condensed] ng2n3 LBA formats: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:11:23.043 19:29:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
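The for-loop record at functions.sh@54 is worth unpacking: it is an extglob pattern that walks both the char-device nodes (ng2nY) and the block-device nodes (nvme2nY) of one controller in a single pass. A standalone illustration of how the two parameter expansions feed the glob (the controller path is assumed; requires `shopt -s extglob`):

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2      # controller sysfs path, as in the trace
    echo "ng${ctrl##*nvme}"         # "ng2"    -> matches ng2n1, ng2n2, ng2n3
    echo "${ctrl##*/}n"             # "nvme2n" -> matches nvme2n1, nvme2n2, nvme2n3
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "namespace node: ${ns##*/}"
    done

Because glob expansion sorts lexically, the ng2n* entries are visited before the nvme2n* ones, which is exactly the order the trace shows.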
00:11:23.044 19:29:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:23.044 19:29:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:11:23.044 19:29:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:11:23.044 19:29:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:11:23.044 19:29:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:11:23.044 19:29:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:11:23.045 19:29:49 nvme_scc -- [per-field IFS=:/read/eval trace condensed] nvme2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:23.045 19:29:49 nvme_scc -- [per-field IFS=:/read/eval trace condensed] nvme2n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:11:23.045 19:29:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
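Every namespace in this run reports flbas=0x4 with lbaf4 flagged "(in use)": per the NVMe spec (this interpretation is mine, not something the script prints), the low nibble of FLBAS selects the active LBA format, and lbads:12 means 2^12-byte data blocks with no metadata. A quick sanity check of those numbers in shell, using values copied from the trace:

    flbas=0x4; lbads=12; nsze=0x100000             # values from the trace above
    fmt=$(( flbas & 0xf ))                         # FLBAS bits 3:0 = in-use format index
    bs=$(( 1 << lbads ))                           # lbads:12 -> 4096-byte blocks
    echo "lbaf$fmt in use, ${bs}B blocks, $(( nsze * bs / 1024 / 1024 )) MiB namespace"

That works out to lbaf4, 4096-byte blocks, and a 4096 MiB (4 GiB) namespace for nsze=0x100000.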
00:11:23.045 19:29:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:23.045 19:29:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:11:23.045 19:29:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:11:23.045 19:29:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:11:23.045 19:29:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:11:23.045 19:29:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:11:23.046 19:29:49 nvme_scc -- [per-field IFS=:/read/eval trace condensed] nvme2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:23.047 19:29:49 nvme_scc -- [per-field IFS=:/read/eval trace condensed] nvme2n2 LBA formats: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:23.047 19:29:49 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.047 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.048 19:29:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.048 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:23.049 19:29:49 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:23.049 19:29:49 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:23.049 19:29:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:23.049 19:29:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:23.049 19:29:49 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:23.049 19:29:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:23.049 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:23.050 19:29:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:23.050 19:29:50 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 
19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:23.050 19:29:50 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.050 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 
19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:23.051 
19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.051 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.052 19:29:50 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:23.052 19:29:50 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:23.052 19:29:50 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:23.052 19:29:50 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
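What ctrl_has_scc is doing for each controller above (and for nvme3 and nvme2 just below) reduces to one bitmask test: ONCS (Optional NVM Command Support) is read out of the cached identify data, and bit 8 of it advertises the Copy command that the SCC tests need; 0x15d has that bit set. A minimal standalone sketch of the same check, assuming nvme-cli is installed and prints its usual "oncs : 0x15d" line; the helper name is ours, not SPDK's:

ctrl_supports_copy() {
    local dev=$1 oncs
    # "nvme id-ctrl" prints one "field : value" pair per line
    oncs=$(nvme id-ctrl "$dev" | awk -F: '/^oncs/ {print $2}')
    (( oncs & 1 << 8 ))  # ONCS bit 8 = Copy (Simple Copy) supported
}

ctrl_supports_copy /dev/nvme1 && echo "nvme1 supports Simple Copy"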
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:11:23.053 19:29:50 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:11:23.053 19:29:50 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:11:23.053 19:29:50 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
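The winner, nvme1 at 0000:00:10.0, is now handed to the simple_copy app that the next stretch of log runs: it writes LBAs 0 through 63 with random data, issues one Simple Copy command with destination LBA 256, and verifies the copy. Roughly the same operation can be driven by hand with nvme-cli's copy subcommand; this is an illustrative sketch only, since flag spellings vary across nvme-cli releases and the namespace must be kernel-visible, so check `nvme copy --help` before relying on it:

# Illustrative only; the SPDK test drives the controller from userspace
# instead. --slbs = source range start LBA, --blocks = 0-based number of
# logical blocks per range (63 -> 64 blocks), --sdlba = destination LBA.
nvme copy /dev/nvme1n1 --slbs=0 --blocks=63 --sdlba=256

# Verify: read both 64-block ranges back and compare byte-for-byte.
cmp <(dd if=/dev/nvme1n1 bs=4096 skip=0   count=64 2>/dev/null) \
    <(dd if=/dev/nvme1n1 bs=4096 skip=256 count=64 2>/dev/null)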
00:11:23.053 19:29:50 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:11:23.310 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:23.872 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:11:23.872 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:11:23.872 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:11:23.872 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:11:23.873 19:29:51 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:11:23.873 19:29:51 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:23.873 19:29:51 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:23.873 19:29:51 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:11:23.873 ************************************
00:11:23.873 START TEST nvme_simple_copy
00:11:23.873 ************************************
00:11:23.873 19:29:51 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:11:24.130 Initializing NVMe Controllers
00:11:24.130 Attaching to 0000:00:10.0
00:11:24.130 Controller supports SCC. Attached to 0000:00:10.0
00:11:24.130 Namespace ID: 1 size: 6GB
00:11:24.130 Initialization complete.
00:11:24.130
00:11:24.130 Controller QEMU NVMe Ctrl (12340 )
00:11:24.130 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:11:24.130 Namespace Block Size:4096
00:11:24.130 Writing LBAs 0 to 63 with Random Data
00:11:24.130 Copied LBAs from 0 - 63 to the Destination LBA 256
00:11:24.130 LBAs matching Written Data: 64
00:11:24.130
00:11:24.130 real 0m0.250s
00:11:24.130 user 0m0.085s
00:11:24.130 sys 0m0.064s
00:11:24.130 19:29:51 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:24.130 ************************************
00:11:24.130 END TEST nvme_simple_copy
00:11:24.130 ************************************
00:11:24.130 19:29:51 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:11:24.130 ************************************
00:11:24.130 END TEST nvme_scc
00:11:24.130 ************************************
00:11:24.130
00:11:24.130 real 0m7.642s
00:11:24.130 user 0m1.122s
00:11:24.130 sys 0m1.390s
00:11:24.130 19:29:51 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:24.130 19:29:51 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:11:24.130 19:29:51 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:11:24.130 19:29:51 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:11:24.130 19:29:51 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:11:24.130 19:29:51 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:11:24.130 19:29:51 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:11:24.130 19:29:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:24.130 19:29:51 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:24.130 19:29:51 -- common/autotest_common.sh@10 -- # set +x
00:11:24.130 ************************************
00:11:24.130 START TEST nvme_fdp
00:11:24.130 ************************************
00:11:24.130 19:29:51 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:11:24.388 * Looking for test storage...
00:11:24.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:11:24.388 19:29:51 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:24.388 19:29:51 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:24.388 19:29:51 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version
00:11:24.388 19:29:51 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@365 -- # decimal 1
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@353 -- # local d=1
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@355 -- # echo 1
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@366 -- # decimal 2
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@353 -- # local d=2
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@355 -- # echo 2
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@368 -- # return 0
00:11:24.388 19:29:51 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:24.388 19:29:51 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:24.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:24.388 --rc genhtml_branch_coverage=1
00:11:24.388 --rc genhtml_function_coverage=1
00:11:24.388 --rc genhtml_legend=1
00:11:24.388 --rc geninfo_all_blocks=1
00:11:24.388 --rc geninfo_unexecuted_blocks=1
00:11:24.388
00:11:24.388 '
00:11:24.388 19:29:51 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:24.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:24.388 --rc genhtml_branch_coverage=1
00:11:24.388 --rc genhtml_function_coverage=1
00:11:24.388 --rc genhtml_legend=1
00:11:24.388 --rc geninfo_all_blocks=1
00:11:24.388 --rc geninfo_unexecuted_blocks=1
00:11:24.388
00:11:24.388 '
00:11:24.388 19:29:51 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:11:24.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:24.388 --rc genhtml_branch_coverage=1
00:11:24.388 --rc genhtml_function_coverage=1
00:11:24.388 --rc genhtml_legend=1
00:11:24.388 --rc geninfo_all_blocks=1
00:11:24.388 --rc geninfo_unexecuted_blocks=1
00:11:24.388
00:11:24.388 '
00:11:24.388 19:29:51 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:11:24.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:24.388 --rc genhtml_branch_coverage=1
00:11:24.388 --rc genhtml_function_coverage=1
00:11:24.388 --rc genhtml_legend=1
00:11:24.388 --rc geninfo_all_blocks=1
00:11:24.388 --rc geninfo_unexecuted_blocks=1
00:11:24.388
00:11:24.388 '
00:11:24.388 19:29:51 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:11:24.388 19:29:51 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:11:24.388 19:29:51 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:11:24.388 19:29:51 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:11:24.388 19:29:51 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:24.388 19:29:51 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:24.388 19:29:51 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:24.388 19:29:51 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:24.388 19:29:51 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:24.388 19:29:51 nvme_fdp -- paths/export.sh@5 -- # export PATH
00:11:24.388 19:29:51 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:24.388 19:29:51 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=()
00:11:24.388 19:29:51 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
00:11:24.388 19:29:51 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=()
00:11:24.388 19:29:51 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
00:11:24.388 19:29:51 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=()
00:11:24.388 19:29:51 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
00:11:24.388 19:29:51 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:11:24.388 19:29:51 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:11:24.388 19:29:51 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
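Those associative arrays (ctrls, nvmes, bdfs) are what scan_nvme_ctrls fills next: for every /sys/class/nvme/nvme* controller, nvme_get runs nvme id-ctrl, splits each "field : value" line on ':', and evals the pair into a per-controller array, which is exactly the stream of nvme0[vid]=0x1b36-style assignments that dominates the rest of this log. A trimmed sketch of that parsing pattern, assuming the same "field : value" output shape; the names here are ours, not functions.sh's:

declare -A ctrl_regs

parse_id_ctrl() {
    local dev=$1 reg val
    # nvme-cli prints "vid : 0x1b36", "sn : 12341", ... one per line
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}   # strip the padding around the key
        [[ -n $reg && -n $val ]] || continue
        ctrl_regs[$reg]=${val# }   # keep the value, minus one leading space
    done < <(nvme id-ctrl "$dev")
}

parse_id_ctrl /dev/nvme0
echo "vid=${ctrl_regs[vid]} oncs=${ctrl_regs[oncs]}"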
00:11:24.388 19:29:51 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:24.388 19:29:51 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:11:24.645 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:24.902 Waiting for block devices as requested
00:11:24.902 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:11:24.902 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:11:24.902 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:11:25.160 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:11:30.439 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:11:30.439 19:29:57 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:11:30.439 19:29:57 nvme_fdp
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:30.439 19:29:57 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:30.439 19:29:57 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:30.439 19:29:57 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:30.439 19:29:57 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.439 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:30.440 19:29:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.440 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:30.441 19:29:57 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.441 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:30.442 19:29:57 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.442 
19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:30.442 19:29:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:30.442 19:29:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:11:30.443 19:29:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:11:30.443 19:29:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:11:30.443 19:29:57 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:11:30.443 19:29:57 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:11:30.443 19:29:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:11:30.443 19:29:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
ng0n1 fields parsed from id-ns (each via the IFS=: / read -r reg val / eval cycle at functions.sh@21-23):
  nsze=0x140000  ncap=0x140000  nuse=0x140000  nsfeat=0x14  nlbaf=7  flbas=0x4
  mc=0x3  dpc=0x1f  dps=0  nmic=0  rescap=0  fpi=0  dlfeat=1
  nawun=0  nawupf=0  nacwu=0  nabsn=0  nabo=0  nabspf=0  noiob=0  nvmcap=0
  npwg=0  npwa=0  npdg=0  npda=0  nows=0
  mssrl=128  mcl=128  msrc=127  nulbaf=0  anagrpid=0  nsattr=0  nvmsetid=0  endgid=0
  nguid=00000000000000000000000000000000  eui64=0000000000000000
  lbaf0='ms:0 lbads:9 rp:0 '            lbaf1='ms:8 lbads:9 rp:0 '
  lbaf2='ms:16 lbads:9 rp:0 '           lbaf3='ms:64 lbads:9 rp:0 '
  lbaf4='ms:0 lbads:12 rp:0 (in use)'   lbaf5='ms:8 lbads:12 rp:0 '
  lbaf6='ms:16 lbads:12 rp:0 '          lbaf7='ms:64 lbads:12 rp:0 '
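For readers skimming the trace: the functions.sh@16-23 lines above scrape nvme-cli's human-readable "field : value" output into a bash associative array keyed by field name. A minimal sketch of that pattern, assuming nvme-cli is on PATH; nvme_get_sketch is a made-up name, and the real nvme_get in the nvme/functions.sh the trace cites does more bookkeeping:

    #!/usr/bin/env bash
    # Run nvme-cli, split each "field : value" line on ':', and stash the
    # pairs in a global associative array named after the device node.
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"              # e.g. declare -gA ng0n1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}     # "lbaf  4 " -> "lbaf4"
            read -r val <<< "$val"       # trim padding around the value
            [[ -n $val ]] || continue    # skip banner lines with no value
            eval "${ref}[\$reg]=\$val"   # e.g. ng0n1[nsze]=0x140000
        done < <(nvme "$@")              # e.g. nvme id-ns /dev/ng0n1
    }

    # usage: nvme_get_sketch ng0n1 id-ns /dev/ng0n1 && echo "${ng0n1[nsze]}"

The eval-with-dynamic-array-name trick is exactly what produces the eval 'ng0n1[nsze]="0x140000"' lines in the trace.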
00:11:30.444 19:29:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:11:30.444 19:29:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:30.444 19:29:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:11:30.444 19:29:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:11:30.444 19:29:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:11:30.444 19:29:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
nvme0n1 fields parsed from id-ns: field-for-field identical to the ng0n1 dump above (nsze=0x140000 through lbaf7='ms:64 lbads:12 rp:0 ', with lbaf4 in use), as expected -- /dev/ng0n1 is the char-generic node for the same namespace that /dev/nvme0n1 exposes as a block device.
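The @54 loop above leans on bash extglob: one @(...|...) pattern picks up both the block namespace nodes (nvme0n1) and the char-generic ones (ng0n1) under a controller's sysfs directory. A standalone sketch of how that glob expands, using a scratch directory instead of real sysfs so it runs anywhere:

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    ctrl=$(mktemp -d)/nvme0
    mkdir -p "$ctrl"
    touch "$ctrl"/{ng0n1,nvme0n1,firmware_rev}    # two namespace nodes + noise
    # With ctrl=.../nvme0, "${ctrl##*nvme}" is "0" and "${ctrl##*/}n" is
    # "nvme0n", so the pattern below is @(ng0|nvme0n)*.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "namespace node: ${ns##*/}"          # -> ng0n1, nvme0n1
    done

firmware_rev is skipped because it matches neither alternative; the @55 existence test then filters out anything the glob matched that is not a real node.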
"' 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:30.446 19:29:57 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:30.446 19:29:57 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:30.446 19:29:57 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:30.446 19:29:57 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:30.446 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:30.447 19:29:57 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.447 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:30.448 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.449 19:29:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:11:30.450 19:29:57 
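The repeating IFS=: / read / eval triplets above are nvme_get flattening nvme-cli output into a global associative array, one reg/val pair per line; the same parser handles id-ctrl (nvme1, just finished) and id-ns (ng1n1, continuing below). Reconstructed from the functions.sh@16-23 markers in the trace, the mechanism is roughly the sketch below; the whitespace trimming and the exact nvme-cli invocation are assumptions, since xtrace only shows the already-clean assignments:

    nvme_get() {                                  # nvme_get <array> <id-cmd> <device>
        local ref=$1 reg val                      # functions.sh@17
        shift                                     # functions.sh@18
        local -gA "$ref=()"                       # functions.sh@20: array outlives the call
        while IFS=: read -r reg val; do           # functions.sh@21: split "reg : val" lines
            reg=${reg//[[:space:]]/} val=${val# } # trimming assumed
            [[ -n $val ]] || continue             # functions.sh@22: skip headers/blank lines
            eval "${ref}[$reg]=\"$val\""          # functions.sh@23: e.g. nvme1[sqes]="0x66"
        done < <(nvme "$@")                       # functions.sh@16: /usr/local/src/nvme-cli/nvme
    }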
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:11:30.450 19:29:57 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:30.450 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:30.451 19:29:57 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:30.451 19:29:57 nvme_fdp -- 
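With ng1n1 parsed and registered (functions.sh@58), the walk moves on to the block device nvme1n1. The namespace loop at functions.sh@53-58 pairs a nameref with an extglob pattern so that both the ng character device and the nvmeXnY block device under the controller's sysfs directory are picked up; a fragment-level sketch (it sits inside a function in the real script, hence local, and extglob must be enabled elsewhere in the harness):

    local -n _ctrl_ns=${ctrl_dev}_ns                            # functions.sh@53: alias for nvme1_ns
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do # functions.sh@54: ng1n1* or nvme1n*
        [[ -e $ns ]] || continue                                # functions.sh@55
        ns_dev=${ns##*/}                                        # functions.sh@56: ng1n1, then nvme1n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"                 # functions.sh@57: same parser as id-ctrl
        _ctrl_ns[${ns##*n}]=$ns_dev                             # functions.sh@58: keyed by nsid, here "1"
    done

Note that ng1n1 and nvme1n1 share nsid 1, so the second pass overwrites the first entry and nvme1_ns ends up holding the block device name.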
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.451 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:30.452 19:29:57 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:30.452 19:29:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.452 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:30.453 19:29:57 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:30.453 19:29:57 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:30.453 19:29:57 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:30.453 19:29:57 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # 
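The tail of the trace above commits nvme1 to the global registries (functions.sh@60-63) and the outer loop advances to nvme2; pci_can_use (scripts/common.sh@18-27) is the allow/block-list gate, and the empty left-hand side in the traced [[ =~ 0000:00:12.0 ]] shows no allow-list is set in this run, so every device passes. A sketch of that scaffolding; the pci derivation at @49 is not visible in the trace, so the readlink below is an assumption:

    for ctrl in /sys/class/nvme/nvme*; do              # functions.sh@47
        [[ -e $ctrl ]] || continue                     # functions.sh@48
        pci=$(readlink -f "$ctrl/device")              # functions.sh@49 (readlink assumed)
        pci=${pci##*/}                                 #   -> e.g. 0000:00:12.0
        pci_can_use "$pci" || continue                 # functions.sh@50: allow/block-list gate
        ctrl_dev=${ctrl##*/}                           # functions.sh@51: e.g. nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"  # functions.sh@52
        # ...namespace pass as sketched earlier...
        ctrls["$ctrl_dev"]=$ctrl_dev                   # functions.sh@60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns              # functions.sh@61: name of the ns array
        bdfs["$ctrl_dev"]=$pci                         # functions.sh@62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev     # functions.sh@63: indexed by ctrl number
    done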
[[ -n '' ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.453 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:30.454 19:29:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
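Most of what nvme_get captures is bit fields; oacs=0x12a just above sets bits 1, 3, 5 and 8, which in the NVMe OACS layout correspond to Format NVM, Namespace Management, Directives and Doorbell Buffer Config, a plausible set for this QEMU controller. A hypothetical helper for testing such bits against the arrays being built (not part of functions.sh):

    ctrl_bit_set() {                        # ctrl_bit_set <array> <field> <bit>
        local -n _a=$1                      # nameref into e.g. nvme2
        (( (${_a[$2]:-0} & (1 << $3)) != 0 ))
    }
    ctrl_bit_set nvme2 oacs 3 && echo "nvme2 supports Namespace Management"
    # 0x12a & (1 << 3) = 0x8, so the test succeeds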
00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.454 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:30.455 19:29:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.455 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
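Two of the controller registers captured just above are worth decoding: per the NVMe spec, sqes and cqes pack the required queue-entry size in the low nibble and the maximum in the high nibble, both as powers of two. A quick check of the 0x66/0x44 values from this trace (illustrative shell, not part of the test script):

sqes=0x66 cqes=0x44
printf 'SQE min/max: %d/%d bytes\n' $((2 ** (sqes & 0xf))) $((2 ** (sqes >> 4)))
printf 'CQE min/max: %d/%d bytes\n' $((2 ** (cqes & 0xf))) $((2 ** (cqes >> 4)))
# -> 64/64 and 16/16, the standard fixed SQE/CQE sizes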
00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.456 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.457 
19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:11:30.457 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.458 19:29:57 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:11:30.458 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:11:30.459 19:29:57 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 
19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:30.459 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:11:30.460 
19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.460 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
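The for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* loop visible in the trace walks each ng2nY node under /sys/class/nvme/nvme2 and repeats this id-ns capture per namespace. The fields it collects pin down the geometry the nvme_fdp test runs against: flbas=0x4 selects LBA format 4 (the lbaf4 entry tagged "(in use)" above: ms:0 lbads:12, i.e. 4096-byte blocks with no metadata), and nsze=0x100000 is the namespace size in blocks. A rough capacity check with those values (illustrative only):

nsze=0x100000 lbads=12
echo "$((nsze * (1 << lbads) / 1024 / 1024)) MiB"   # 1048576 blocks * 4 KiB = 4096 MiB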
00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:11:30.461 19:29:57 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:30.461 19:29:57 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:30.461 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:30.462 19:29:57 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:30.462 
19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:30.462 19:29:57 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.462 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:30.463 
19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
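
The nvme2n1 parse just above finishes the same way ng2n3 did: functions.sh@58 records the filled array name in _ctrl_ns, then the extglob loop at functions.sh@54 advances to the next sysfs entry, as the trace below shows. A sketch of that walk, with the glob pattern copied verbatim from the trace; the shopt setting, the declaration of _ctrl_ns, and the loop scaffolding are assumptions, and nvme_get refers to the sketch above:

    shopt -s extglob                           # assumed; @(...) needs extglob
    declare -A _ctrl_ns=()                     # assumed to exist before the loop

    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # trace @54: ng2*|nvme2n*
        [[ -e $ns ]] || continue               # trace @55: [[ -e .../nvme2/nvme2n1 ]]
        ns_dev=${ns##*/}                       # trace @56: ns_dev=nvme2n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"    # trace @57, filling nvme2n1=()
        _ctrl_ns[${ns##*n}]=$ns_dev            # trace @58: key is the namespace id
    done

In the glob order shown in this log, the ngXnY character nodes are visited before the nvmeXnY block nodes, so for each namespace id the later pass overwrites the earlier entry: _ctrl_ns[3]=ng2n3 above is replaced by _ctrl_ns[3]=nvme2n3 further down.
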
00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:30.463 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:30.464 19:29:57 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.464 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:30.465 19:29:57 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.465 19:29:57 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:30.465 19:29:57 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:30.465 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:30.466 19:29:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:30.466 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:30.467 19:29:57 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:30.467 19:29:57 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:30.467 19:29:57 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:30.467 19:29:57 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:30.467 19:29:57 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
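Each [[ -n ... ]] / eval pair in the trace above and below is one pass of the same compact loop in nvme/functions.sh: the output of nvme id-ctrl is split on the first colon and stored into a per-controller associative array. A minimal sketch of that idiom, assuming an nvme binary on PATH and simplified whitespace handling (the harness actually invokes its own build at /usr/local/src/nvme-cli/nvme, and the real function trims differently):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                # e.g. declare -gA nvme3=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue      # keep only "register: value" lines
            reg=${reg//[[:space:]]/}       # strip padding around the key
            val=${val# }                   # drop the space after the colon
            eval "${ref}[\$reg]=\$val"     # nvme3[vid]=0x1b36, and so on
        done < <(nvme "$@")
    }
    # invoked above as: nvme_get nvme3 id-ctrl /dev/nvme3

Values keep any embedded colons because read assigns everything after the first separator to val, which is how multi-field lines such as the lbaf descriptors and power-state entries survive intact.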
00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.467 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.468 19:29:57 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 
19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.468 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:30.469 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
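Once the nvme3 map is filled in (only the power-state fields remain below), the trace turns to selecting an FDP-capable controller. The check reduces to a nameref lookup of the ctratt register followed by a test of bit 19, the FDP-support bit of the Identify Controller CTRATT field. A hedged reconstruction of the two helpers as they appear in this trace (names from functions.sh, bodies simplified):

    get_nvme_ctrl_feature() {
        local ctrl=$1 reg=$2
        [[ -n $ctrl ]] || return 1
        local -n _ctrl=$ctrl               # nameref into the nvmeN array
        [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
    }

    ctrl_has_fdp() {
        local ctrl=$1 ctratt
        ctratt=$(get_nvme_ctrl_feature "$ctrl" ctratt)
        (( ${ctratt:-0} & 1 << 19 ))       # bit 19 = FDP supported
    }

In this run only nvme3 reports ctratt=0x88010, which has bit 19 (0x80000) set; nvme0, nvme1, and nvme2 report 0x8000 and fail the test, so the selection loop below echoes nvme3 alone.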
00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:30.470 19:29:57 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:30.470 19:29:57 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:30.471 19:29:57 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:11:30.471 19:29:57 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:11:30.729 19:29:57 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:11:30.729 19:29:57 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:11:30.729 19:29:57 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:11:30.729 19:29:57 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:30.986 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:31.551 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:31.551 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:31.551 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:31.551 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:31.552 19:29:58 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:31.552 19:29:58 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:31.552 19:29:58 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.552 19:29:58 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:31.552 ************************************ 00:11:31.552 START TEST nvme_flexible_data_placement 00:11:31.552 ************************************ 00:11:31.552 19:29:58 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:31.809 Initializing NVMe Controllers 00:11:31.809 Attaching to 0000:00:13.0 00:11:31.809 Controller supports FDP Attached to 0000:00:13.0 00:11:31.809 Namespace ID: 1 Endurance Group ID: 1 00:11:31.809 Initialization complete. 
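The START TEST banner above and the real/user/sys timing further down come from the autotest run_test wrapper, which times each test command and brackets its output so individual tests can be grepped out of a long log. A minimal sketch, assuming a simplified form of the wrapper in common/autotest_common.sh (the real one does additional bookkeeping around xtrace and exit status):

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                          # produces the real/user/sys lines
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return "$rc"
    }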
00:11:31.809 00:11:31.809 ================================== 00:11:31.809 == FDP tests for Namespace: #01 == 00:11:31.809 ================================== 00:11:31.809 00:11:31.809 Get Feature: FDP: 00:11:31.809 ================= 00:11:31.809 Enabled: Yes 00:11:31.809 FDP configuration Index: 0 00:11:31.809 00:11:31.809 FDP configurations log page 00:11:31.809 =========================== 00:11:31.809 Number of FDP configurations: 1 00:11:31.809 Version: 0 00:11:31.809 Size: 112 00:11:31.809 FDP Configuration Descriptor: 0 00:11:31.809 Descriptor Size: 96 00:11:31.809 Reclaim Group Identifier format: 2 00:11:31.809 FDP Volatile Write Cache: Not Present 00:11:31.809 FDP Configuration: Valid 00:11:31.809 Vendor Specific Size: 0 00:11:31.809 Number of Reclaim Groups: 2 00:11:31.809 Number of Reclaim Unit Handles: 8 00:11:31.809 Max Placement Identifiers: 128 00:11:31.809 Number of Namespaces Supported: 256 00:11:31.809 Reclaim Unit Nominal Size: 6000000 bytes 00:11:31.809 Estimated Reclaim Unit Time Limit: Not Reported 00:11:31.809 RUH Desc #000: RUH Type: Initially Isolated 00:11:31.809 RUH Desc #001: RUH Type: Initially Isolated 00:11:31.809 RUH Desc #002: RUH Type: Initially Isolated 00:11:31.809 RUH Desc #003: RUH Type: Initially Isolated 00:11:31.809 RUH Desc #004: RUH Type: Initially Isolated 00:11:31.809 RUH Desc #005: RUH Type: Initially Isolated 00:11:31.809 RUH Desc #006: RUH Type: Initially Isolated 00:11:31.809 RUH Desc #007: RUH Type: Initially Isolated 00:11:31.809 00:11:31.809 FDP reclaim unit handle usage log page 00:11:31.809 ====================================== 00:11:31.809 Number of Reclaim Unit Handles: 8 00:11:31.809 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:31.809 RUH Usage Desc #001: RUH Attributes: Unused 00:11:31.809 RUH Usage Desc #002: RUH Attributes: Unused 00:11:31.809 RUH Usage Desc #003: RUH Attributes: Unused 00:11:31.809 RUH Usage Desc #004: RUH Attributes: Unused 00:11:31.809 RUH Usage Desc #005: RUH Attributes: Unused 00:11:31.810 RUH Usage Desc #006: RUH Attributes: Unused 00:11:31.810 RUH Usage Desc #007: RUH Attributes: Unused 00:11:31.810 00:11:31.810 FDP statistics log page 00:11:31.810 ======================= 00:11:31.810 Host bytes with metadata written: 875753472 00:11:31.810 Media bytes with metadata written: 875831296 00:11:31.810 Media bytes erased: 0 00:11:31.810 00:11:31.810 FDP Reclaim unit handle status 00:11:31.810 ============================== 00:11:31.810 Number of RUHS descriptors: 2 00:11:31.810 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000001cd1 00:11:31.810 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:31.810 00:11:31.810 FDP write on placement id: 0 success 00:11:31.810 00:11:31.810 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:31.810 00:11:31.810 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:31.810 00:11:31.810 Get Feature: FDP Events for Placement handle: #0 00:11:31.810 ======================== 00:11:31.810 Number of FDP Events: 6 00:11:31.810 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:31.810 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:31.810 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:11:31.810 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:31.810 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:31.810 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:31.810 00:11:31.810 FDP events log page
00:11:31.810 =================== 00:11:31.810 Number of FDP events: 1 00:11:31.810 FDP Event #0: 00:11:31.810 Event Type: RU Not Written to Capacity 00:11:31.810 Placement Identifier: Valid 00:11:31.810 NSID: Valid 00:11:31.810 Location: Valid 00:11:31.810 Placement Identifier: 0 00:11:31.810 Event Timestamp: 5 00:11:31.810 Namespace Identifier: 1 00:11:31.810 Reclaim Group Identifier: 0 00:11:31.810 Reclaim Unit Handle Identifier: 0 00:11:31.810 00:11:31.810 FDP test passed 00:11:31.810 00:11:31.810 real 0m0.230s 00:11:31.810 user 0m0.072s 00:11:31.810 sys 0m0.057s 00:11:31.810 ************************************ 00:11:31.810 END TEST nvme_flexible_data_placement 00:11:31.810 ************************************ 00:11:31.810 19:29:58 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.810 19:29:58 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:31.810 ************************************ 00:11:31.810 END TEST nvme_fdp 00:11:31.810 ************************************ 00:11:31.810 00:11:31.810 real 0m7.552s 00:11:31.810 user 0m1.123s 00:11:31.810 sys 0m1.316s 00:11:31.810 19:29:58 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.810 19:29:58 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:31.810 19:29:58 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:11:31.810 19:29:58 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:31.810 19:29:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:31.810 19:29:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.810 19:29:58 -- common/autotest_common.sh@10 -- # set +x 00:11:31.810 ************************************ 00:11:31.810 START TEST nvme_rpc 00:11:31.810 ************************************ 00:11:31.810 19:29:58 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:31.810 * Looking for test storage... 
00:11:31.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:31.810 19:29:59 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:31.810 19:29:59 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:31.810 19:29:59 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:32.067 19:29:59 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.067 19:29:59 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:11:32.067 19:29:59 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.067 19:29:59 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:32.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.067 --rc genhtml_branch_coverage=1 00:11:32.067 --rc genhtml_function_coverage=1 00:11:32.067 --rc genhtml_legend=1 00:11:32.067 --rc geninfo_all_blocks=1 00:11:32.067 --rc geninfo_unexecuted_blocks=1 00:11:32.068 00:11:32.068 ' 00:11:32.068 19:29:59 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:32.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.068 --rc genhtml_branch_coverage=1 00:11:32.068 --rc genhtml_function_coverage=1 00:11:32.068 --rc genhtml_legend=1 00:11:32.068 --rc geninfo_all_blocks=1 00:11:32.068 --rc geninfo_unexecuted_blocks=1 00:11:32.068 00:11:32.068 ' 00:11:32.068 19:29:59 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:32.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.068 --rc genhtml_branch_coverage=1 00:11:32.068 --rc genhtml_function_coverage=1 00:11:32.068 --rc genhtml_legend=1 00:11:32.068 --rc geninfo_all_blocks=1 00:11:32.068 --rc geninfo_unexecuted_blocks=1 00:11:32.068 00:11:32.068 ' 00:11:32.068 19:29:59 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:32.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.068 --rc genhtml_branch_coverage=1 00:11:32.068 --rc genhtml_function_coverage=1 00:11:32.068 --rc genhtml_legend=1 00:11:32.068 --rc geninfo_all_blocks=1 00:11:32.068 --rc geninfo_unexecuted_blocks=1 00:11:32.068 00:11:32.068 ' 00:11:32.068 19:29:59 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:32.068 19:29:59 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:32.068 19:29:59 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:32.068 19:29:59 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:11:32.068 19:29:59 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:11:32.068 19:29:59 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:11:32.068 19:29:59 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:32.068 19:29:59 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:11:32.068 19:29:59 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:32.068 19:29:59 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:32.068 19:29:59 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:32.068 19:29:59 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:32.068 19:29:59 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:32.068 19:29:59 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:11:32.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.068 19:29:59 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:32.068 19:29:59 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66142 00:11:32.068 19:29:59 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:32.068 19:29:59 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66142 00:11:32.068 19:29:59 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:32.068 19:29:59 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 66142 ']' 00:11:32.068 19:29:59 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.068 19:29:59 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.068 19:29:59 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.068 19:29:59 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.068 19:29:59 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.068 [2024-12-05 19:29:59.234689] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:11:32.068 [2024-12-05 19:29:59.234794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66142 ] 00:11:32.325 [2024-12-05 19:29:59.395914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:32.325 [2024-12-05 19:29:59.492132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.325 [2024-12-05 19:29:59.492133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.890 19:30:00 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:32.890 19:30:00 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:32.890 19:30:00 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:33.147 Nvme0n1 00:11:33.147 19:30:00 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:33.147 19:30:00 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:33.405 request: 00:11:33.405 { 00:11:33.405 "bdev_name": "Nvme0n1", 00:11:33.405 "filename": "non_existing_file", 00:11:33.405 "method": "bdev_nvme_apply_firmware", 00:11:33.405 "req_id": 1 00:11:33.405 } 00:11:33.405 Got JSON-RPC error response 00:11:33.405 response: 00:11:33.405 { 00:11:33.405 "code": -32603, 00:11:33.405 "message": "open file failed." 00:11:33.405 } 00:11:33.405 19:30:00 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:33.405 19:30:00 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:33.405 19:30:00 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:33.663 19:30:00 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:33.663 19:30:00 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 66142 00:11:33.663 19:30:00 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 66142 ']' 00:11:33.663 19:30:00 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 66142 00:11:33.663 19:30:00 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:11:33.663 19:30:00 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.663 19:30:00 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66142 00:11:33.663 killing process with pid 66142 00:11:33.663 19:30:00 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.663 19:30:00 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.663 19:30:00 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66142' 00:11:33.663 19:30:00 nvme_rpc -- common/autotest_common.sh@973 -- # kill 66142 00:11:33.663 19:30:00 nvme_rpc -- common/autotest_common.sh@978 -- # wait 66142 00:11:35.032 ************************************ 00:11:35.032 END TEST nvme_rpc 00:11:35.032 ************************************ 00:11:35.032 00:11:35.032 real 0m3.243s 00:11:35.032 user 0m6.177s 00:11:35.032 sys 0m0.483s 00:11:35.032 19:30:02 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.032 19:30:02 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.032 19:30:02 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:35.032 19:30:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:11:35.032 19:30:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.032 19:30:02 -- common/autotest_common.sh@10 -- # set +x 00:11:35.032 ************************************ 00:11:35.032 START TEST nvme_rpc_timeouts 00:11:35.032 ************************************ 00:11:35.032 19:30:02 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:35.297 * Looking for test storage... 00:11:35.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:35.297 19:30:02 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:35.297 19:30:02 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:11:35.297 19:30:02 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:35.297 19:30:02 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.297 19:30:02 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:11:35.297 19:30:02 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.297 19:30:02 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:35.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.297 --rc genhtml_branch_coverage=1 00:11:35.297 --rc genhtml_function_coverage=1 00:11:35.297 --rc genhtml_legend=1 00:11:35.297 --rc geninfo_all_blocks=1 00:11:35.297 --rc geninfo_unexecuted_blocks=1 00:11:35.297 00:11:35.297 ' 00:11:35.297 19:30:02 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:35.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.297 --rc genhtml_branch_coverage=1 00:11:35.297 --rc genhtml_function_coverage=1 00:11:35.297 --rc genhtml_legend=1 00:11:35.297 --rc geninfo_all_blocks=1 00:11:35.297 --rc geninfo_unexecuted_blocks=1 00:11:35.297 00:11:35.297 ' 00:11:35.297 19:30:02 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:35.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.297 --rc genhtml_branch_coverage=1 00:11:35.297 --rc genhtml_function_coverage=1 00:11:35.297 --rc genhtml_legend=1 00:11:35.297 --rc geninfo_all_blocks=1 00:11:35.297 --rc geninfo_unexecuted_blocks=1 00:11:35.297 00:11:35.297 ' 00:11:35.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:35.297 19:30:02 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:35.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.297 --rc genhtml_branch_coverage=1 00:11:35.297 --rc genhtml_function_coverage=1 00:11:35.297 --rc genhtml_legend=1 00:11:35.297 --rc geninfo_all_blocks=1 00:11:35.297 --rc geninfo_unexecuted_blocks=1 00:11:35.297 00:11:35.297 ' 00:11:35.297 19:30:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:35.297 19:30:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66207 00:11:35.297 19:30:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66207 00:11:35.297 19:30:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66239 00:11:35.297 19:30:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:35.297 19:30:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66239 00:11:35.297 19:30:02 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 66239 ']' 00:11:35.297 19:30:02 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.297 19:30:02 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.297 19:30:02 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.297 19:30:02 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.297 19:30:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:35.297 19:30:02 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:35.297 [2024-12-05 19:30:02.465728] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:11:35.297 [2024-12-05 19:30:02.465941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66239 ] 00:11:35.554 [2024-12-05 19:30:02.627504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:35.554 [2024-12-05 19:30:02.725956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.554 [2024-12-05 19:30:02.726105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.118 19:30:03 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.118 19:30:03 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:11:36.118 19:30:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:36.118 Checking default timeout settings: 00:11:36.118 19:30:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:36.683 19:30:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:36.683 Making settings changes with rpc: 00:11:36.683 19:30:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:36.683 Check default vs. modified settings: 00:11:36.683 19:30:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:36.683 19:30:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:36.940 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:36.940 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:36.940 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66207 00:11:36.940 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:36.940 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:36.940 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:36.940 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66207 00:11:36.940 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:36.940 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:36.940 Setting action_on_timeout is changed as expected. 00:11:36.940 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:36.940 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:36.940 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:11:36.940 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:36.941 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66207 00:11:36.941 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:36.941 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:36.941 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:36.941 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66207 00:11:36.941 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:37.199 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:37.199 Setting timeout_us is changed as expected. 00:11:37.199 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:37.199 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:37.199 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:37.199 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:37.199 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:37.199 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66207 00:11:37.199 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:37.199 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:37.199 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66207 00:11:37.199 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:37.199 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:37.199 Setting timeout_admin_us is changed as expected. 00:11:37.199 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:37.199 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:37.199 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
00:11:37.199 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:37.199 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66207 /tmp/settings_modified_66207 00:11:37.199 19:30:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66239 00:11:37.199 19:30:04 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 66239 ']' 00:11:37.199 19:30:04 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 66239 00:11:37.199 19:30:04 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:11:37.199 19:30:04 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:37.199 19:30:04 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66239 00:11:37.199 killing process with pid 66239 00:11:37.199 19:30:04 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:37.199 19:30:04 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:37.199 19:30:04 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66239' 00:11:37.199 19:30:04 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 66239 00:11:37.199 19:30:04 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 66239 00:11:38.573 RPC TIMEOUT SETTING TEST PASSED. 00:11:38.573 19:30:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:11:38.573 ************************************ 00:11:38.573 END TEST nvme_rpc_timeouts 00:11:38.573 ************************************ 00:11:38.573 00:11:38.573 real 0m3.269s 00:11:38.573 user 0m6.365s 00:11:38.573 sys 0m0.475s 00:11:38.573 19:30:05 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.573 19:30:05 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:38.573 19:30:05 -- spdk/autotest.sh@239 -- # uname -s 00:11:38.573 19:30:05 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:11:38.573 19:30:05 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:38.573 19:30:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:38.573 19:30:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.573 19:30:05 -- common/autotest_common.sh@10 -- # set +x 00:11:38.573 ************************************ 00:11:38.573 START TEST sw_hotplug 00:11:38.573 ************************************ 00:11:38.573 19:30:05 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:38.573 * Looking for test storage... 
00:11:38.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:38.573 19:30:05 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:38.573 19:30:05 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:11:38.573 19:30:05 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:38.573 19:30:05 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.573 19:30:05 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:11:38.573 19:30:05 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.573 19:30:05 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:38.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.573 --rc genhtml_branch_coverage=1 00:11:38.573 --rc genhtml_function_coverage=1 00:11:38.573 --rc genhtml_legend=1 00:11:38.573 --rc geninfo_all_blocks=1 00:11:38.573 --rc geninfo_unexecuted_blocks=1 00:11:38.573 00:11:38.573 ' 00:11:38.573 19:30:05 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:38.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.573 --rc genhtml_branch_coverage=1 00:11:38.573 --rc genhtml_function_coverage=1 00:11:38.573 --rc genhtml_legend=1 00:11:38.573 --rc geninfo_all_blocks=1 00:11:38.573 --rc geninfo_unexecuted_blocks=1 00:11:38.573 00:11:38.573 ' 00:11:38.573 19:30:05 
sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:38.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.573 --rc genhtml_branch_coverage=1 00:11:38.573 --rc genhtml_function_coverage=1 00:11:38.573 --rc genhtml_legend=1 00:11:38.573 --rc geninfo_all_blocks=1 00:11:38.573 --rc geninfo_unexecuted_blocks=1 00:11:38.573 00:11:38.573 ' 00:11:38.573 19:30:05 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:38.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.573 --rc genhtml_branch_coverage=1 00:11:38.573 --rc genhtml_function_coverage=1 00:11:38.573 --rc genhtml_legend=1 00:11:38.573 --rc geninfo_all_blocks=1 00:11:38.573 --rc geninfo_unexecuted_blocks=1 00:11:38.573 00:11:38.573 ' 00:11:38.573 19:30:05 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:38.890 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:38.890 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:38.890 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:38.890 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:38.890 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:39.163 19:30:06 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:11:39.163 19:30:06 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:11:39.163 19:30:06 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:11:39.163 19:30:06 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@233 -- # local class 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:39.163 
19:30:06 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:39.163 19:30:06 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:39.164 19:30:06 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:39.164 19:30:06 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:39.164 19:30:06 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:39.164 19:30:06 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:39.164 19:30:06 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:39.164 19:30:06 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:39.164 19:30:06 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:39.164 19:30:06 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:39.164 19:30:06 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:39.164 19:30:06 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:39.164 19:30:06 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:39.164 19:30:06 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:39.164 19:30:06 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:11:39.164 19:30:06 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:39.164 19:30:06 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:39.164 19:30:06 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:39.164 19:30:06 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:39.164 19:30:06 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:11:39.164 19:30:06 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:39.164 19:30:06 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:11:39.164 19:30:06 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:39.164 19:30:06 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:39.422 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:39.422 Waiting for block devices as requested 00:11:39.422 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:39.680 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:39.680 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:39.680 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:44.945 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:44.945 19:30:11 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:11:44.945 19:30:11 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:44.945 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:11:45.202 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:45.202 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:11:45.460 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:11:45.460 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:45.460 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:45.718 19:30:12 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:11:45.718 19:30:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:45.718 19:30:12 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:11:45.718 19:30:12 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:45.718 19:30:12 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=67091 00:11:45.718 19:30:12 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:11:45.718 19:30:12 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:45.718 19:30:12 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:45.718 19:30:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:11:45.718 19:30:12 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:45.718 19:30:12 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:45.718 19:30:12 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:45.718 19:30:12 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:45.718 19:30:12 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:11:45.718 19:30:12 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:45.718 19:30:12 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:11:45.718 19:30:12 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:11:45.718 19:30:12 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:45.718 19:30:12 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:45.976 Initializing NVMe Controllers 00:11:45.976 Attaching to 0000:00:10.0 00:11:45.976 Attaching to 0000:00:11.0 00:11:45.976 Attached to 0000:00:11.0 00:11:45.976 Attached to 0000:00:10.0 00:11:45.976 Initialization complete. Starting I/O... 00:11:45.976 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:45.976 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:45.976 00:11:46.909 QEMU NVMe Ctrl (12341 ): 2666 I/Os completed (+2666) 00:11:46.909 QEMU NVMe Ctrl (12340 ): 2526 I/Os completed (+2526) 00:11:46.909 00:11:47.905 QEMU NVMe Ctrl (12341 ): 6161 I/Os completed (+3495) 00:11:47.905 QEMU NVMe Ctrl (12340 ): 5974 I/Os completed (+3448) 00:11:47.905 00:11:48.836 QEMU NVMe Ctrl (12341 ): 9506 I/Os completed (+3345) 00:11:48.836 QEMU NVMe Ctrl (12340 ): 9052 I/Os completed (+3078) 00:11:48.836 00:11:49.768 QEMU NVMe Ctrl (12341 ): 13050 I/Os completed (+3544) 00:11:49.768 QEMU NVMe Ctrl (12340 ): 12511 I/Os completed (+3459) 00:11:49.768 00:11:51.143 QEMU NVMe Ctrl (12341 ): 16186 I/Os completed (+3136) 00:11:51.143 QEMU NVMe Ctrl (12340 ): 15645 I/Os completed (+3134) 00:11:51.143 00:11:51.707 19:30:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:51.707 19:30:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:51.707 19:30:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:51.707 [2024-12-05 19:30:18.820853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:51.707 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:51.707 [2024-12-05 19:30:18.822311] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.707 [2024-12-05 19:30:18.822364] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.707 [2024-12-05 19:30:18.822382] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.708 [2024-12-05 19:30:18.822401] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.708 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:51.708 [2024-12-05 19:30:18.824162] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.708 [2024-12-05 19:30:18.824197] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.708 [2024-12-05 19:30:18.824210] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.708 [2024-12-05 19:30:18.824225] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.708 19:30:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:51.708 19:30:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:51.708 [2024-12-05 19:30:18.843848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:51.708 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:51.708 [2024-12-05 19:30:18.845069] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.708 [2024-12-05 19:30:18.845213] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.708 [2024-12-05 19:30:18.845248] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.708 [2024-12-05 19:30:18.845274] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.708 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:51.708 [2024-12-05 19:30:18.846972] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.708 [2024-12-05 19:30:18.847094] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.708 [2024-12-05 19:30:18.847115] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.708 [2024-12-05 19:30:18.847127] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.708 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:51.708 EAL: Scan for (pci) bus failed. 00:11:51.708 19:30:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:51.708 19:30:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:51.708 19:30:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:51.708 19:30:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:51.708 19:30:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:51.965 19:30:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:51.965 19:30:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:51.965 19:30:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:51.965 19:30:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:51.965 19:30:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:51.965 Attaching to 0000:00:10.0 00:11:51.965 Attached to 0000:00:10.0 00:11:51.965 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:51.965 00:11:51.965 19:30:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:51.965 19:30:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:51.965 19:30:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:51.965 Attaching to 0000:00:11.0 00:11:51.965 Attached to 0000:00:11.0 00:11:52.960 QEMU NVMe Ctrl (12340 ): 3267 I/Os completed (+3267) 00:11:52.960 QEMU NVMe Ctrl (12341 ): 2934 I/Os completed (+2934) 00:11:52.960 00:11:53.891 QEMU NVMe Ctrl (12340 ): 6432 I/Os completed (+3165) 00:11:53.891 QEMU NVMe Ctrl (12341 ): 5991 I/Os completed (+3057) 00:11:53.891 00:11:54.824 QEMU NVMe Ctrl (12340 ): 9960 I/Os completed (+3528) 00:11:54.824 QEMU NVMe Ctrl (12341 ): 9480 I/Os completed (+3489) 00:11:54.824 00:11:56.197 QEMU NVMe Ctrl (12340 ): 13643 I/Os completed (+3683) 00:11:56.197 QEMU NVMe Ctrl (12341 ): 13164 I/Os completed (+3684) 00:11:56.197 00:11:56.763 QEMU NVMe Ctrl (12340 ): 17241 I/Os completed (+3598) 00:11:56.763 QEMU NVMe Ctrl (12341 ): 16727 I/Os completed (+3563) 00:11:56.763 00:11:58.139 QEMU NVMe Ctrl (12340 ): 20392 I/Os completed (+3151) 00:11:58.139 QEMU NVMe Ctrl (12341 ): 19894 I/Os completed (+3167) 00:11:58.139 00:11:59.074 QEMU NVMe Ctrl (12340 ): 23736 I/Os completed (+3344) 00:11:59.074 
QEMU NVMe Ctrl (12341 ): 23205 I/Os completed (+3311) 00:11:59.074 00:12:00.006 QEMU NVMe Ctrl (12340 ): 26930 I/Os completed (+3194) 00:12:00.006 QEMU NVMe Ctrl (12341 ): 26302 I/Os completed (+3097) 00:12:00.006 00:12:00.938 QEMU NVMe Ctrl (12340 ): 30131 I/Os completed (+3201) 00:12:00.938 QEMU NVMe Ctrl (12341 ): 29639 I/Os completed (+3337) 00:12:00.938 00:12:01.872 QEMU NVMe Ctrl (12340 ): 33439 I/Os completed (+3308) 00:12:01.872 QEMU NVMe Ctrl (12341 ): 33061 I/Os completed (+3422) 00:12:01.872 00:12:02.836 QEMU NVMe Ctrl (12340 ): 36780 I/Os completed (+3341) 00:12:02.836 QEMU NVMe Ctrl (12341 ): 36277 I/Os completed (+3216) 00:12:02.836 00:12:03.770 QEMU NVMe Ctrl (12340 ): 40055 I/Os completed (+3275) 00:12:03.770 QEMU NVMe Ctrl (12341 ): 39505 I/Os completed (+3228) 00:12:03.770 00:12:04.028 19:30:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:04.028 19:30:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:04.028 19:30:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:04.028 19:30:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:04.028 [2024-12-05 19:30:31.094383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:04.028 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:04.028 [2024-12-05 19:30:31.095627] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.028 [2024-12-05 19:30:31.095784] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.028 [2024-12-05 19:30:31.095824] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.028 [2024-12-05 19:30:31.095912] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.028 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:04.029 [2024-12-05 19:30:31.097930] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.029 [2024-12-05 19:30:31.098043] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.029 [2024-12-05 19:30:31.098075] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.029 [2024-12-05 19:30:31.098140] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.029 19:30:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:04.029 19:30:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:04.029 [2024-12-05 19:30:31.120141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:04.029 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:04.029 [2024-12-05 19:30:31.121326] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.029 [2024-12-05 19:30:31.121443] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.029 [2024-12-05 19:30:31.121527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.029 [2024-12-05 19:30:31.121565] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.029 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:04.029 [2024-12-05 19:30:31.123347] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.029 [2024-12-05 19:30:31.123385] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.029 [2024-12-05 19:30:31.123401] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.029 [2024-12-05 19:30:31.123415] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.029 19:30:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:04.029 19:30:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:04.029 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:04.029 EAL: Scan for (pci) bus failed. 00:12:04.029 19:30:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:04.029 19:30:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:04.029 19:30:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:04.029 19:30:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:04.029 19:30:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:04.029 19:30:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:04.029 19:30:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:04.029 19:30:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:04.029 Attaching to 0000:00:10.0 00:12:04.287 Attached to 0000:00:10.0 00:12:04.287 19:30:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:04.287 19:30:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:04.287 19:30:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:04.287 Attaching to 0000:00:11.0 00:12:04.287 Attached to 0000:00:11.0 00:12:04.852 QEMU NVMe Ctrl (12340 ): 2571 I/Os completed (+2571) 00:12:04.852 QEMU NVMe Ctrl (12341 ): 2167 I/Os completed (+2167) 00:12:04.852 00:12:05.785 QEMU NVMe Ctrl (12340 ): 5769 I/Os completed (+3198) 00:12:05.785 QEMU NVMe Ctrl (12341 ): 5211 I/Os completed (+3044) 00:12:05.785 00:12:07.159 QEMU NVMe Ctrl (12340 ): 8897 I/Os completed (+3128) 00:12:07.159 QEMU NVMe Ctrl (12341 ): 8264 I/Os completed (+3053) 00:12:07.159 00:12:08.096 QEMU NVMe Ctrl (12340 ): 12496 I/Os completed (+3599) 00:12:08.096 QEMU NVMe Ctrl (12341 ): 11885 I/Os completed (+3621) 00:12:08.096 00:12:09.030 QEMU NVMe Ctrl (12340 ): 16017 I/Os completed (+3521) 00:12:09.030 QEMU NVMe Ctrl (12341 ): 15219 I/Os completed (+3334) 00:12:09.030 00:12:09.964 QEMU NVMe Ctrl (12340 ): 19277 I/Os completed (+3260) 00:12:09.964 QEMU NVMe Ctrl (12341 ): 18483 I/Os completed (+3264) 00:12:09.964 00:12:10.896 QEMU NVMe Ctrl (12340 ): 22444 I/Os completed (+3167) 00:12:10.896 QEMU NVMe Ctrl (12341 ): 21546 I/Os completed (+3063) 00:12:10.896 
00:12:11.829 QEMU NVMe Ctrl (12340 ): 26018 I/Os completed (+3574) 00:12:11.829 QEMU NVMe Ctrl (12341 ): 25062 I/Os completed (+3516) 00:12:11.829 00:12:12.817 QEMU NVMe Ctrl (12340 ): 29583 I/Os completed (+3565) 00:12:12.817 QEMU NVMe Ctrl (12341 ): 28448 I/Os completed (+3386) 00:12:12.817 00:12:14.184 QEMU NVMe Ctrl (12340 ): 32880 I/Os completed (+3297) 00:12:14.184 QEMU NVMe Ctrl (12341 ): 31717 I/Os completed (+3269) 00:12:14.184 00:12:15.115 QEMU NVMe Ctrl (12340 ): 36295 I/Os completed (+3415) 00:12:15.115 QEMU NVMe Ctrl (12341 ): 35020 I/Os completed (+3303) 00:12:15.115 00:12:16.045 QEMU NVMe Ctrl (12340 ): 39879 I/Os completed (+3584) 00:12:16.045 QEMU NVMe Ctrl (12341 ): 38609 I/Os completed (+3589) 00:12:16.045 00:12:16.303 19:30:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:16.303 19:30:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:16.303 19:30:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:16.303 19:30:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:16.303 [2024-12-05 19:30:43.360026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:16.303 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:16.303 [2024-12-05 19:30:43.361068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.303 [2024-12-05 19:30:43.361139] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.303 [2024-12-05 19:30:43.361161] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.303 [2024-12-05 19:30:43.361180] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.303 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:16.303 [2024-12-05 19:30:43.362917] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.303 [2024-12-05 19:30:43.362965] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.303 [2024-12-05 19:30:43.362986] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.303 [2024-12-05 19:30:43.363003] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.303 19:30:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:16.303 19:30:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:16.303 [2024-12-05 19:30:43.382055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:16.303 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:16.303 [2024-12-05 19:30:43.382974] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.303 [2024-12-05 19:30:43.383021] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.303 [2024-12-05 19:30:43.383043] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.303 [2024-12-05 19:30:43.383063] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.303 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:16.303 [2024-12-05 19:30:43.384475] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.303 [2024-12-05 19:30:43.384514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.303 [2024-12-05 19:30:43.384535] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.303 [2024-12-05 19:30:43.384550] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.303 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:16.303 EAL: Scan for (pci) bus failed. 00:12:16.303 19:30:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:16.303 19:30:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:16.303 19:30:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:16.303 19:30:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:16.303 19:30:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:16.303 19:30:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:16.303 19:30:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:16.303 19:30:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:16.303 19:30:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:16.303 19:30:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:16.303 Attaching to 0000:00:10.0 00:12:16.303 Attached to 0000:00:10.0 00:12:16.561 19:30:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:16.561 19:30:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:16.561 19:30:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:16.561 Attaching to 0000:00:11.0 00:12:16.561 Attached to 0000:00:11.0 00:12:16.561 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:16.561 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:16.561 [2024-12-05 19:30:43.627544] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:28.768 19:30:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:28.768 19:30:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:28.768 19:30:55 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.80 00:12:28.768 19:30:55 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.80 00:12:28.768 19:30:55 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:28.768 19:30:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.80 00:12:28.768 19:30:55 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.80 2 00:12:28.768 remove_attach_helper took 42.80s to complete (handling 2 nvme drive(s)) 19:30:55 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:12:35.331 19:31:01 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 67091 00:12:35.331 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (67091) - No such process 00:12:35.331 19:31:01 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 67091 00:12:35.331 19:31:01 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:12:35.331 19:31:01 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:12:35.331 19:31:01 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:12:35.331 19:31:01 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67644 00:12:35.331 19:31:01 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:35.331 19:31:01 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:35.331 19:31:01 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67644 00:12:35.331 19:31:01 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67644 ']' 00:12:35.331 19:31:01 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.331 19:31:01 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:35.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.331 19:31:01 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.331 19:31:01 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:35.331 19:31:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:35.331 [2024-12-05 19:31:01.707680] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:12:35.331 [2024-12-05 19:31:01.707801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67644 ] 00:12:35.331 [2024-12-05 19:31:01.863358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.331 [2024-12-05 19:31:01.958276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.331 19:31:02 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:35.331 19:31:02 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:12:35.331 19:31:02 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:35.331 19:31:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.331 19:31:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:35.331 19:31:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.331 19:31:02 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:12:35.331 19:31:02 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:35.331 19:31:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:35.331 19:31:02 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:35.331 19:31:02 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:35.331 19:31:02 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:35.331 19:31:02 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:35.331 19:31:02 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:12:35.331 19:31:02 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:35.331 19:31:02 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:35.331 19:31:02 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:35.331 19:31:02 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:35.331 19:31:02 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:41.890 19:31:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:41.890 19:31:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:41.890 19:31:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:41.890 19:31:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:41.890 19:31:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:41.890 19:31:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:41.890 19:31:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:41.890 19:31:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:41.890 19:31:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:41.890 19:31:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:41.890 19:31:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:41.890 19:31:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.890 19:31:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:41.890 19:31:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.890 19:31:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:41.890 19:31:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:41.890 [2024-12-05 19:31:08.632992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:12:41.890 [2024-12-05 19:31:08.634311] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.890 [2024-12-05 19:31:08.634350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.890 [2024-12-05 19:31:08.634363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.891 [2024-12-05 19:31:08.634381] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.891 [2024-12-05 19:31:08.634389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.891 [2024-12-05 19:31:08.634397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.891 [2024-12-05 19:31:08.634405] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.891 [2024-12-05 19:31:08.634413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.891 [2024-12-05 19:31:08.634420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.891 [2024-12-05 19:31:08.634431] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.891 [2024-12-05 19:31:08.634438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.891 [2024-12-05 19:31:08.634446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.891 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:41.891 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:41.891 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:41.891 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:41.891 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:41.891 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:41.891 19:31:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.891 19:31:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:41.891 19:31:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.150 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:42.150 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:42.150 [2024-12-05 19:31:09.233015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
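[editor's note] The xtrace at sw_hotplug.sh@12-13 and the (( 2 > 0 )) → (( 1 > 0 )) → (( 0 > 0 )) countdown spell out how the test decides the surprise removal has finished: it keeps asking the target which PCI addresses still back an NVMe bdev. Reconstructed from the trace — the /dev/fd/63 that jq reads is bash process substitution over the RPC output:

    # PCI addresses (BDFs) of every NVMe controller still known to the target.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # Poll until every hot-removed controller has dropped out of bdev_get_bdevs.
    wait_for_detach() {
        local bdfs
        bdfs=($(bdev_bdfs))
        while (( ${#bdfs[@]} > 0 )); do
            printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
            sleep 0.5
            bdfs=($(bdev_bdfs))
        done
    }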
00:12:42.150 [2024-12-05 19:31:09.234461] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.150 [2024-12-05 19:31:09.234498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.150 [2024-12-05 19:31:09.234511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.150 [2024-12-05 19:31:09.234527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.150 [2024-12-05 19:31:09.234537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.150 [2024-12-05 19:31:09.234544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.150 [2024-12-05 19:31:09.234553] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.150 [2024-12-05 19:31:09.234559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.150 [2024-12-05 19:31:09.234567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.150 [2024-12-05 19:31:09.234574] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.150 [2024-12-05 19:31:09.234583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.150 [2024-12-05 19:31:09.234590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.468 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:42.468 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:42.468 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:42.468 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:42.468 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:42.468 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:42.468 19:31:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.468 19:31:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:42.468 19:31:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.468 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:42.468 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:42.741 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:42.741 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:42.741 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:42.741 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:42.741 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:42.741 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:42.741 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:42.742 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
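[editor's note] Once the bdevs are gone, @56 and the @58-62 loop (continuing just below) bring the devices back and hand them to uio_pci_generic. xtrace does not print redirection targets, so apart from echo 1 > /sys/bus/pci/rescan (visible in the @112 trap earlier) the sysfs paths in this sketch are assumptions about what those bare echoes write to, following the usual driver_override rebind recipe:

    echo 1 > /sys/bus/pci/rescan                               # @56 (target shown in the @112 trap)
    for dev in "${nvmes[@]}"; do                               # @58
        # Targets below are assumed; xtrace hides redirections.
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"   # @59
        echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind" 2>/dev/null  # @60
        echo "$dev" > /sys/bus/pci/drivers_probe                             # @61
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"                # @62: clear override
    done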
00:12:42.742 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:42.742 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:42.742 19:31:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:54.951 19:31:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:54.951 19:31:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:54.951 19:31:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:54.951 19:31:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:54.951 19:31:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:54.951 19:31:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:54.951 19:31:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.951 19:31:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:54.951 19:31:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.951 19:31:21 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:54.951 19:31:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:54.951 19:31:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:54.951 19:31:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:54.951 19:31:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:54.951 19:31:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:54.951 19:31:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:54.951 19:31:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:54.951 19:31:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:54.951 19:31:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:54.951 19:31:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:54.951 19:31:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:54.951 19:31:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.951 19:31:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:54.951 19:31:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.951 19:31:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:54.951 19:31:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:54.951 [2024-12-05 19:31:22.033191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
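[editor's note] After the rebind the script sleeps 12 seconds (@66 — plausibly 2x the hotplug_wait of 6 passed to remove_attach_helper) and then asserts that exactly the original pair of BDFs is back (@68-71). Minus the xtrace noise, the check amounts to:

    sleep 12                     # let the hotplug poller reattach both controllers
    bdfs=($(bdev_bdfs))
    # The iteration only passes if both original BDFs reappeared, in order.
    [[ "${bdfs[*]}" == "0000:00:10.0 0000:00:11.0" ]]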
00:12:54.951 [2024-12-05 19:31:22.034524] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.951 [2024-12-05 19:31:22.034557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.951 [2024-12-05 19:31:22.034568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.951 [2024-12-05 19:31:22.034586] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.952 [2024-12-05 19:31:22.034593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.952 [2024-12-05 19:31:22.034602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.952 [2024-12-05 19:31:22.034609] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.952 [2024-12-05 19:31:22.034617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.952 [2024-12-05 19:31:22.034624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.952 [2024-12-05 19:31:22.034632] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.952 [2024-12-05 19:31:22.034639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.952 [2024-12-05 19:31:22.034647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.517 19:31:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:55.517 19:31:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:55.517 19:31:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:55.517 19:31:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:55.517 19:31:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:55.517 19:31:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:55.517 19:31:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.517 19:31:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:55.517 19:31:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.517 19:31:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:55.517 19:31:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:55.517 [2024-12-05 19:31:22.733182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
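[editor's note] A side note on reading the @71 line: the wall of backslashes (== \0\0\0\0\:\0\0\:\1\0\.\0 ...) is not in the script. Inside [[ ]] the right side of == is a glob pattern, so xtrace escapes every character to show the match is literal. The two forms below behave identically:

    bdf="0000:00:10.0"
    [[ $bdf == \0\0\0\0\:\0\0\:\1\0\.\0 ]] && echo literal-match   # what xtrace prints
    [[ $bdf == "0000:00:10.0" ]]           && echo literal-match   # what the script says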
00:12:55.517 [2024-12-05 19:31:22.734358] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:55.517 [2024-12-05 19:31:22.734391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.517 [2024-12-05 19:31:22.734405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.517 [2024-12-05 19:31:22.734420] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:55.517 [2024-12-05 19:31:22.734429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.517 [2024-12-05 19:31:22.734437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.517 [2024-12-05 19:31:22.734445] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:55.517 [2024-12-05 19:31:22.734451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.517 [2024-12-05 19:31:22.734459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.517 [2024-12-05 19:31:22.734470] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:55.517 [2024-12-05 19:31:22.734478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.517 [2024-12-05 19:31:22.734484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.084 19:31:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:56.084 19:31:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:56.084 19:31:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:56.084 19:31:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:56.084 19:31:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:56.084 19:31:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:56.084 19:31:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.084 19:31:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:56.084 19:31:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.084 19:31:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:56.084 19:31:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:56.084 19:31:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:56.084 19:31:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:56.084 19:31:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:56.084 19:31:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:56.084 19:31:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:56.084 19:31:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:56.084 19:31:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:56.084 19:31:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:56.084 19:31:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:56.342 19:31:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:56.342 19:31:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:08.579 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:08.579 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:08.579 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:08.579 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:08.579 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:08.579 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:08.579 19:31:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.579 19:31:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:08.579 19:31:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.579 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:08.579 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:08.579 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:08.579 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:08.579 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:08.579 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:08.579 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:08.579 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:08.579 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:08.579 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:08.579 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:08.579 19:31:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.579 19:31:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:08.579 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:08.579 19:31:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.579 [2024-12-05 19:31:35.433383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:08.579 [2024-12-05 19:31:35.434701] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.579 [2024-12-05 19:31:35.434752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.579 [2024-12-05 19:31:35.434764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.579 [2024-12-05 19:31:35.434780] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.579 [2024-12-05 19:31:35.434788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.579 [2024-12-05 19:31:35.434797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.579 [2024-12-05 19:31:35.434804] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.579 [2024-12-05 19:31:35.434812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.579 [2024-12-05 19:31:35.434818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.579 [2024-12-05 19:31:35.434827] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.579 [2024-12-05 19:31:35.434833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.579 [2024-12-05 19:31:35.434841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.579 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:08.579 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:08.837 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:08.837 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:08.837 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:08.837 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:08.837 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:08.837 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:08.837 19:31:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.837 19:31:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:08.837 19:31:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.837 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:08.837 19:31:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:08.837 [2024-12-05 19:31:36.033407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
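[editor's note] Stepping back, each ~13-second block in this stretch of log is one pass of remove_attach_helper's (( hotplug_events-- )) loop. A schematic of the whole cycle, stitched from the helpers sketched above — the per-device remove path is again an assumption, since @40's echo target is not traced:

    remove_attach_helper_sketch() {
        local hotplug_events=$1 hotplug_wait=$2 dev
        sleep "$hotplug_wait"                                 # @36: let the initial attach settle
        while (( hotplug_events-- )); do                      # @38
            for dev in "${nvmes[@]}"; do
                echo 1 > "/sys/bus/pci/devices/$dev/remove"   # @40 (path assumed)
            done
            wait_for_detach                                   # @50-51 polling, sketched earlier
            echo 1 > /sys/bus/pci/rescan                      # @56
            # ...@58-62 rebind loop, sketched earlier...
            sleep $(( hotplug_wait * 2 ))                     # @66
            [[ "$(bdev_bdfs | xargs)" == "${nvmes[*]}" ]]     # @68-71
        done
    }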
00:13:08.837 [2024-12-05 19:31:36.034761] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.837 [2024-12-05 19:31:36.034793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.837 [2024-12-05 19:31:36.034805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.837 [2024-12-05 19:31:36.034820] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.837 [2024-12-05 19:31:36.034829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.837 [2024-12-05 19:31:36.034836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.837 [2024-12-05 19:31:36.034845] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.837 [2024-12-05 19:31:36.034852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.837 [2024-12-05 19:31:36.034861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.837 [2024-12-05 19:31:36.034868] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.837 [2024-12-05 19:31:36.034876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.837 [2024-12-05 19:31:36.034883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.403 19:31:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:09.403 19:31:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:09.403 19:31:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:09.403 19:31:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:09.403 19:31:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:09.403 19:31:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:09.403 19:31:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.403 19:31:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:09.403 19:31:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.403 19:31:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:09.403 19:31:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:09.403 19:31:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:09.403 19:31:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:09.403 19:31:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:09.661 19:31:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:09.661 19:31:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:09.661 19:31:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:09.661 19:31:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:09.661 19:31:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:13:09.661 19:31:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:09.661 19:31:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:09.661 19:31:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:21.882 19:31:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:21.882 19:31:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:21.882 19:31:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:21.882 19:31:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:21.882 19:31:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:21.882 19:31:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:21.882 19:31:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.882 19:31:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:21.882 19:31:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.882 19:31:48 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:21.882 19:31:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:21.882 19:31:48 sw_hotplug -- common/autotest_common.sh@719 -- # time=46.26 00:13:21.882 19:31:48 sw_hotplug -- common/autotest_common.sh@720 -- # echo 46.26 00:13:21.882 19:31:48 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:21.882 19:31:48 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=46.26 00:13:21.882 19:31:48 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 46.26 2 00:13:21.882 remove_attach_helper took 46.26s to complete (handling 2 nvme drive(s)) 19:31:48 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:13:21.882 19:31:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.882 19:31:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:21.882 19:31:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.882 19:31:48 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:21.882 19:31:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.882 19:31:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:21.882 19:31:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.882 19:31:48 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:13:21.882 19:31:48 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:21.882 19:31:48 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:21.882 19:31:48 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:13:21.882 19:31:48 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:13:21.882 19:31:48 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:13:21.882 19:31:48 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:13:21.882 19:31:48 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:13:21.882 19:31:48 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:21.882 19:31:48 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:21.882 19:31:48 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:21.882 19:31:48 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:21.882 19:31:48 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:28.459 19:31:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:28.459 19:31:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:28.459 19:31:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:28.459 19:31:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:28.459 19:31:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:28.459 19:31:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:28.459 19:31:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:28.459 19:31:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:28.459 19:31:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:28.459 19:31:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:28.459 19:31:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.459 19:31:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:28.459 19:31:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:28.459 19:31:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.459 19:31:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:28.459 19:31:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:28.459 [2024-12-05 19:31:54.917595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:28.459 [2024-12-05 19:31:54.918654] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.459 [2024-12-05 19:31:54.918702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.459 [2024-12-05 19:31:54.918713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.459 [2024-12-05 19:31:54.918732] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.459 [2024-12-05 19:31:54.918739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.459 [2024-12-05 19:31:54.918747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.459 [2024-12-05 19:31:54.918754] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.459 [2024-12-05 19:31:54.918762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.459 [2024-12-05 19:31:54.918769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.459 [2024-12-05 19:31:54.918777] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.459 [2024-12-05 19:31:54.918784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.459 [2024-12-05 19:31:54.918796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.459 [2024-12-05 19:31:55.317599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
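[editor's note] This second phase (tgt_run_hotplug, @119-122 above) exercises the same remove/attach cycle with SPDK's own hotplug monitor, toggled over RPC rather than driven purely from the shell — only the -d/-e flags shown in the trace are used here:

    # Disable, then re-enable, the target's NVMe hotplug poller.
    rpc_cmd bdev_nvme_set_hotplug -d
    rpc_cmd bdev_nvme_set_hotplug -e
    # With -e active the target detects removals and arrivals itself,
    # so the bdevs reappear after the shell-side PCI rescan.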
00:13:28.459 [2024-12-05 19:31:55.318617] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.459 [2024-12-05 19:31:55.318650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.459 [2024-12-05 19:31:55.318661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.459 [2024-12-05 19:31:55.318685] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.459 [2024-12-05 19:31:55.318694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.459 [2024-12-05 19:31:55.318702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.459 [2024-12-05 19:31:55.318711] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.459 [2024-12-05 19:31:55.318718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.459 [2024-12-05 19:31:55.318726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.459 [2024-12-05 19:31:55.318733] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.459 [2024-12-05 19:31:55.318740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.459 [2024-12-05 19:31:55.318747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.459 19:31:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:28.459 19:31:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:28.459 19:31:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:28.459 19:31:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:28.459 19:31:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:28.459 19:31:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:28.459 19:31:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.459 19:31:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:28.459 19:31:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.459 19:31:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:28.459 19:31:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:28.459 19:31:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:28.459 19:31:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:28.459 19:31:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:28.459 19:31:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:28.459 19:31:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:28.459 19:31:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:28.459 19:31:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:28.459 19:31:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:28.459 19:31:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:28.459 19:31:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:28.459 19:31:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:40.655 19:32:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:40.655 19:32:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:40.655 19:32:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:40.655 19:32:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:40.655 19:32:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:40.655 19:32:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:40.655 19:32:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.655 19:32:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:40.655 19:32:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.655 19:32:07 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:40.655 19:32:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:40.655 19:32:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:40.655 19:32:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:40.655 [2024-12-05 19:32:07.717801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:40.655 [2024-12-05 19:32:07.718949] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:40.655 [2024-12-05 19:32:07.718981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:40.655 [2024-12-05 19:32:07.718992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.655 [2024-12-05 19:32:07.719009] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:40.655 [2024-12-05 19:32:07.719017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:40.655 [2024-12-05 19:32:07.719025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.655 [2024-12-05 19:32:07.719033] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:40.655 [2024-12-05 19:32:07.719041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:40.655 [2024-12-05 19:32:07.719047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.655 [2024-12-05 19:32:07.719058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:40.655 [2024-12-05 19:32:07.719064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:40.655 [2024-12-05 19:32:07.719072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.655 19:32:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:40.655 19:32:07 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:40.655 19:32:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:40.655 19:32:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:40.655 19:32:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:40.655 19:32:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:40.655 19:32:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:40.655 19:32:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:40.655 19:32:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.655 19:32:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:40.655 19:32:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.655 19:32:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:40.655 19:32:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:41.220 [2024-12-05 19:32:08.217807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:13:41.220 [2024-12-05 19:32:08.218815] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.220 [2024-12-05 19:32:08.218845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.220 [2024-12-05 19:32:08.218857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.220 [2024-12-05 19:32:08.218872] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.220 [2024-12-05 19:32:08.218883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.220 [2024-12-05 19:32:08.218889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.220 [2024-12-05 19:32:08.218899] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.220 [2024-12-05 19:32:08.218905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.220 [2024-12-05 19:32:08.218913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.220 [2024-12-05 19:32:08.218921] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.220 [2024-12-05 19:32:08.218929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.220 [2024-12-05 19:32:08.218936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.220 19:32:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:41.220 19:32:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:41.220 19:32:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:41.220 19:32:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:41.220 19:32:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:41.220 19:32:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.220 19:32:08 
sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:41.220 19:32:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:41.220 19:32:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.220 19:32:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:41.220 19:32:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:41.220 19:32:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:41.220 19:32:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:41.220 19:32:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:41.220 19:32:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:41.220 19:32:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:41.220 19:32:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:41.220 19:32:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:41.220 19:32:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:41.477 19:32:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:41.477 19:32:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:41.477 19:32:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:53.679 19:32:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:53.679 19:32:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:53.679 19:32:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:53.679 19:32:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:53.679 19:32:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:53.679 19:32:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:53.679 19:32:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.679 19:32:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:53.679 19:32:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.679 19:32:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:53.679 19:32:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:53.679 19:32:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:53.679 19:32:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:53.679 19:32:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:53.679 19:32:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:53.679 [2024-12-05 19:32:20.618026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:53.679 [2024-12-05 19:32:20.619276] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:53.679 [2024-12-05 19:32:20.619308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.679 [2024-12-05 19:32:20.619321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.679 [2024-12-05 19:32:20.619337] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:53.679 [2024-12-05 19:32:20.619344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.679 [2024-12-05 19:32:20.619355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.679 [2024-12-05 19:32:20.619362] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:53.679 [2024-12-05 19:32:20.619372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.679 [2024-12-05 19:32:20.619378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.679 [2024-12-05 19:32:20.619387] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:53.679 [2024-12-05 19:32:20.619393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.679 [2024-12-05 19:32:20.619401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.679 19:32:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:53.679 19:32:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:53.679 19:32:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:53.679 19:32:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:53.679 19:32:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:53.679 19:32:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.679 19:32:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:53.679 19:32:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:53.679 19:32:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.679 19:32:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:53.679 19:32:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:53.938 [2024-12-05 19:32:21.018022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:53.938 [2024-12-05 19:32:21.018983] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:53.938 [2024-12-05 19:32:21.019015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.938 [2024-12-05 19:32:21.019028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.938 [2024-12-05 19:32:21.019043] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:53.938 [2024-12-05 19:32:21.019051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.938 [2024-12-05 19:32:21.019058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.938 [2024-12-05 19:32:21.019067] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:53.938 [2024-12-05 19:32:21.019073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.938 [2024-12-05 19:32:21.019083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.938 [2024-12-05 19:32:21.019090] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:53.938 [2024-12-05 19:32:21.019101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.938 [2024-12-05 19:32:21.019108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.938 19:32:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:53.938 19:32:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:53.938 19:32:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:53.938 19:32:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:53.938 19:32:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:53.938 19:32:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:53.938 19:32:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.938 19:32:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:53.938 19:32:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.197 19:32:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:54.197 19:32:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:54.197 19:32:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:54.197 19:32:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:54.197 19:32:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:54.197 19:32:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:54.197 19:32:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:54.197 19:32:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:54.197 19:32:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:54.197 19:32:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
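[editor's note] The teardown traced just below pairs two more autotest_common.sh helpers: the TIMEFORMAT trick behind the "remove_attach_helper took 44.63s" summary, and killprocess, which refuses to kill a sudo wrapper and reaps the target with wait. Loose sketches of both, modeled on the behavior the xtrace shows rather than the exact upstream code:

    # %2R makes bash's `time` print only real (wall-clock) seconds, 2 decimals.
    timing_cmd() {
        local TIMEFORMAT=%2R time
        time=$( { time "$@" >/dev/null 2>&1; } 2>&1 )   # sketch: discards cmd output
        printf '%s took %ss to complete\n' "$1" "$time"
    }

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] && kill -0 "$pid" || return 1
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        fi
        [[ $process_name == sudo ]] && return 1   # never SIGTERM the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reap; valid because spdk_tgt is our child
    }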
00:13:54.197 19:32:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:54.197 19:32:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:54.197 19:32:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:06.395 19:32:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:06.395 19:32:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:06.395 19:32:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:06.395 19:32:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:06.395 19:32:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:06.395 19:32:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:06.395 19:32:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.395 19:32:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:06.395 19:32:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.395 19:32:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:06.395 19:32:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:06.395 19:32:33 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.63 00:14:06.395 19:32:33 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.63 00:14:06.395 19:32:33 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:06.395 19:32:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.63 00:14:06.395 19:32:33 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.63 2 00:14:06.395 remove_attach_helper took 44.63s to complete (handling 2 nvme drive(s)) 19:32:33 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:14:06.395 19:32:33 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67644 00:14:06.395 19:32:33 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67644 ']' 00:14:06.395 19:32:33 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67644 00:14:06.395 19:32:33 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:14:06.395 19:32:33 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.395 19:32:33 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67644 00:14:06.395 19:32:33 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.395 killing process with pid 67644 00:14:06.395 19:32:33 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.395 19:32:33 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67644' 00:14:06.395 19:32:33 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67644 00:14:06.395 19:32:33 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67644 00:14:07.798 19:32:34 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:07.798 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:08.369 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:08.369 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:08.369 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:08.369 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:08.369 00:14:08.369 real 2m30.044s 00:14:08.369 user 1m52.281s 00:14:08.369 sys 0m16.498s 00:14:08.369 19:32:35 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.369 19:32:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:08.369 ************************************ 00:14:08.369 END TEST sw_hotplug 00:14:08.369 ************************************ 00:14:08.632 19:32:35 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:14:08.632 19:32:35 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:08.632 19:32:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:08.632 19:32:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.632 19:32:35 -- common/autotest_common.sh@10 -- # set +x 00:14:08.632 ************************************ 00:14:08.632 START TEST nvme_xnvme 00:14:08.632 ************************************ 00:14:08.632 19:32:35 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:08.632 * Looking for test storage... 00:14:08.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:08.632 19:32:35 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:08.632 19:32:35 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:14:08.632 19:32:35 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:08.632 19:32:35 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:08.632 19:32:35 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:14:08.632 19:32:35 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.632 19:32:35 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:08.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.632 --rc genhtml_branch_coverage=1 00:14:08.632 --rc genhtml_function_coverage=1 00:14:08.632 --rc genhtml_legend=1 00:14:08.632 --rc geninfo_all_blocks=1 00:14:08.632 --rc geninfo_unexecuted_blocks=1 00:14:08.632 00:14:08.632 ' 00:14:08.632 19:32:35 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:08.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.632 --rc genhtml_branch_coverage=1 00:14:08.632 --rc genhtml_function_coverage=1 00:14:08.632 --rc genhtml_legend=1 00:14:08.632 --rc geninfo_all_blocks=1 00:14:08.632 --rc geninfo_unexecuted_blocks=1 00:14:08.632 00:14:08.632 ' 00:14:08.632 19:32:35 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:08.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.632 --rc genhtml_branch_coverage=1 00:14:08.632 --rc genhtml_function_coverage=1 00:14:08.632 --rc genhtml_legend=1 00:14:08.632 --rc geninfo_all_blocks=1 00:14:08.632 --rc geninfo_unexecuted_blocks=1 00:14:08.632 00:14:08.632 ' 00:14:08.633 19:32:35 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:08.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.633 --rc genhtml_branch_coverage=1 00:14:08.633 --rc genhtml_function_coverage=1 00:14:08.633 --rc genhtml_legend=1 00:14:08.633 --rc geninfo_all_blocks=1 00:14:08.633 --rc geninfo_unexecuted_blocks=1 00:14:08.633 00:14:08.633 ' 00:14:08.633 19:32:35 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:14:08.633 19:32:35 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:14:08.633 19:32:35 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:08.633 19:32:35 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:14:08.633 19:32:35 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:08.633 19:32:35 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:08.633 19:32:35 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:14:08.633 19:32:35 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:14:08.633 19:32:35 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:14:08.633 19:32:35 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
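This CONFIG_* replay (it continues below through the DPDK, sanitizer, and xnvme settings) is test/common/build_config.sh being sourced, which is how individual test scripts learn what the SPDK build actually enabled. A hedged illustration of the usual consumption pattern — not taken from this log, just the idiom such a file supports:

    # Illustrative only: gate a test on a build feature recorded in build_config.sh.
    source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
    if [[ "$CONFIG_XNVME" != y ]]; then
        echo 'SPDK built without xnvme support, skipping' >&2
        exit 0
    fi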
00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:14:08.633 19:32:35 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:14:08.633 19:32:35 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:14:08.633 19:32:35 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:14:08.633 19:32:35 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:14:08.633 19:32:35 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:14:08.633 19:32:35 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:14:08.633 19:32:35 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:14:08.633 19:32:35 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:14:08.633 19:32:35 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:14:08.633 19:32:35 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:14:08.633 19:32:35 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:08.633 19:32:35 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:08.633 19:32:35 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:08.633 19:32:35 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:08.633 19:32:35 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:08.633 19:32:35 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:08.633 19:32:35 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:14:08.633 19:32:35 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:08.633 #define SPDK_CONFIG_H 00:14:08.633 #define SPDK_CONFIG_AIO_FSDEV 1 00:14:08.633 #define SPDK_CONFIG_APPS 1 00:14:08.633 #define SPDK_CONFIG_ARCH native 00:14:08.633 #define SPDK_CONFIG_ASAN 1 00:14:08.633 #undef SPDK_CONFIG_AVAHI 00:14:08.633 #undef SPDK_CONFIG_CET 00:14:08.633 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:14:08.633 #define SPDK_CONFIG_COVERAGE 1 00:14:08.633 #define SPDK_CONFIG_CROSS_PREFIX 00:14:08.633 #undef SPDK_CONFIG_CRYPTO 00:14:08.633 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:08.633 #undef SPDK_CONFIG_CUSTOMOCF 00:14:08.633 #undef SPDK_CONFIG_DAOS 00:14:08.634 #define SPDK_CONFIG_DAOS_DIR 00:14:08.634 #define SPDK_CONFIG_DEBUG 1 00:14:08.634 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:08.634 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:14:08.634 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:08.634 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:08.634 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:08.634 #undef SPDK_CONFIG_DPDK_UADK 00:14:08.634 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:08.634 #define SPDK_CONFIG_EXAMPLES 1 00:14:08.634 #undef SPDK_CONFIG_FC 00:14:08.634 #define SPDK_CONFIG_FC_PATH 00:14:08.634 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:08.634 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:08.634 #define SPDK_CONFIG_FSDEV 1 00:14:08.634 #undef SPDK_CONFIG_FUSE 00:14:08.634 #undef SPDK_CONFIG_FUZZER 00:14:08.634 #define SPDK_CONFIG_FUZZER_LIB 00:14:08.634 #undef SPDK_CONFIG_GOLANG 00:14:08.634 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:08.634 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:08.634 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:08.634 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:14:08.634 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:08.634 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:08.634 #undef SPDK_CONFIG_HAVE_LZ4 00:14:08.634 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:14:08.634 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:14:08.634 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:08.634 #define SPDK_CONFIG_IDXD 1 00:14:08.634 #define SPDK_CONFIG_IDXD_KERNEL 1 00:14:08.634 #undef SPDK_CONFIG_IPSEC_MB 00:14:08.634 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:08.634 #define SPDK_CONFIG_ISAL 1 00:14:08.634 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:08.634 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:08.634 #define SPDK_CONFIG_LIBDIR 00:14:08.634 #undef SPDK_CONFIG_LTO 00:14:08.634 #define SPDK_CONFIG_MAX_LCORES 128 00:14:08.634 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:14:08.634 #define SPDK_CONFIG_NVME_CUSE 1 00:14:08.634 #undef SPDK_CONFIG_OCF 00:14:08.634 #define SPDK_CONFIG_OCF_PATH 00:14:08.634 #define SPDK_CONFIG_OPENSSL_PATH 00:14:08.634 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:08.634 #define SPDK_CONFIG_PGO_DIR 00:14:08.634 #undef SPDK_CONFIG_PGO_USE 00:14:08.634 #define SPDK_CONFIG_PREFIX /usr/local 00:14:08.634 #undef SPDK_CONFIG_RAID5F 00:14:08.634 #undef SPDK_CONFIG_RBD 00:14:08.634 #define SPDK_CONFIG_RDMA 1 00:14:08.634 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:08.634 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:08.634 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:08.634 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:08.634 #define SPDK_CONFIG_SHARED 1 00:14:08.634 #undef SPDK_CONFIG_SMA 00:14:08.634 #define SPDK_CONFIG_TESTS 1 00:14:08.634 #undef SPDK_CONFIG_TSAN 00:14:08.634 #define SPDK_CONFIG_UBLK 1 00:14:08.634 #define SPDK_CONFIG_UBSAN 1 00:14:08.634 #undef SPDK_CONFIG_UNIT_TESTS 00:14:08.634 #undef SPDK_CONFIG_URING 00:14:08.634 #define SPDK_CONFIG_URING_PATH 00:14:08.634 #undef SPDK_CONFIG_URING_ZNS 00:14:08.634 #undef SPDK_CONFIG_USDT 00:14:08.634 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:08.634 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:08.634 #undef SPDK_CONFIG_VFIO_USER 00:14:08.634 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:08.634 #define SPDK_CONFIG_VHOST 1 00:14:08.634 #define SPDK_CONFIG_VIRTIO 1 00:14:08.634 #undef SPDK_CONFIG_VTUNE 00:14:08.634 #define SPDK_CONFIG_VTUNE_DIR 00:14:08.634 #define SPDK_CONFIG_WERROR 1 00:14:08.634 #define SPDK_CONFIG_WPDK_DIR 00:14:08.634 #define SPDK_CONFIG_XNVME 1 00:14:08.634 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:08.634 19:32:35 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:08.634 19:32:35 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:08.634 19:32:35 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:14:08.634 19:32:35 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.634 19:32:35 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.634 19:32:35 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.634 19:32:35 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.634 19:32:35 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.634 19:32:35 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.634 19:32:35 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:08.634 19:32:35 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:14:08.634 19:32:35 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:14:08.634 19:32:35 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:14:08.634 19:32:35 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:14:08.634 19:32:35 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:14:08.634 19:32:35 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:14:08.634 19:32:35 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:14:08.634 19:32:35 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:14:08.634 19:32:35 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:14:08.634 19:32:35 nvme_xnvme -- pm/common@68 -- # uname -s 00:14:08.634 19:32:35 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:14:08.634 19:32:35 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:08.634 
19:32:35 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:08.634 19:32:35 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:08.634 19:32:35 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:08.634 19:32:35 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:08.634 19:32:35 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:08.634 19:32:35 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:14:08.634 19:32:35 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:08.634 19:32:35 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:08.634 19:32:35 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:08.634 19:32:35 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:08.634 19:32:35 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:14:08.634 19:32:35 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:14:08.634 19:32:35 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:14:08.634 19:32:35 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:14:08.635 19:32:35 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:08.635 19:32:35 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:08.635 19:32:35 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
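A few lines up, the harness builds a LeakSanitizer suppression file on the fly: /var/tmp/asan_suppression_file is recreated, a leak:libfuse3.so entry is appended, and LSAN_OPTIONS points the ASan-instrumented binaries at it so known third-party leaks don't fail the run. The same idea in isolation (paths and the libfuse3 entry are from the trace; the commented second entry is a hypothetical example):

    # Sketch: suppress leak reports that originate in third-party libraries.
    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    echo 'leak:libfuse3.so' >> "$supp"        # entry seen in this log
    # echo 'leak:libsomething.so' >> "$supp"  # hypothetical additional entry
    export LSAN_OPTIONS="suppressions=$supp"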
00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 69014 ]] 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 69014 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:14:08.636 19:32:35 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:14:08.898 19:32:35 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.46eXfo 00:14:08.898 19:32:35 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:08.898 19:32:35 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:14:08.898 19:32:35 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:14:08.898 19:32:35 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.46eXfo/tests/xnvme /tmp/spdk.46eXfo 00:14:08.898 19:32:35 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:14:08.898 19:32:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:08.898 19:32:35 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:14:08.898 19:32:35 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:14:08.899 19:32:35 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974552576 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593661440 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974552576 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593661440 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265249792 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265397248 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:08.899 19:32:35 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt/output 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=96495558656 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3207221248 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:14:08.899 * Looking for test storage... 
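The search that follows walks the df -T output into the mounts/fss/sizes/avails/uses arrays shown above, then checks each storage candidate (the test dir first, then a mktemp fallback) for at least requested_size bytes free before exporting SPDK_TEST_STORAGE. Condensed to its core, under the assumption that GNU df is available (the real helper reuses the pre-parsed arrays rather than calling df again):

    # Sketch of the decision: is there ~2.06 GiB free under the test directory?
    requested_size=$((2 * 1024 * 1024 * 1024 + 64 * 1024 * 1024))  # 2214592512 bytes, as traced
    testdir=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
    target_space=$(( $(df --output=avail "$testdir" | tail -1) * 1024 ))  # df counts 1K blocks
    if (( target_space >= requested_size )); then
        export SPDK_TEST_STORAGE=$testdir
        printf '* Found test storage at %s\n' "$testdir"
    else
        export SPDK_TEST_STORAGE=$(mktemp -dt spdk.XXXXXX)  # simplified fallback
    fi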
00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13974552576 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:08.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:08.899 19:32:35 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.899 19:32:35 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:08.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.900 --rc genhtml_branch_coverage=1 00:14:08.900 --rc genhtml_function_coverage=1 00:14:08.900 --rc genhtml_legend=1 00:14:08.900 --rc geninfo_all_blocks=1 00:14:08.900 --rc geninfo_unexecuted_blocks=1 00:14:08.900 00:14:08.900 ' 00:14:08.900 19:32:35 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:08.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.900 --rc genhtml_branch_coverage=1 00:14:08.900 --rc genhtml_function_coverage=1 00:14:08.900 --rc genhtml_legend=1 00:14:08.900 --rc geninfo_all_blocks=1 
00:14:08.900 --rc geninfo_unexecuted_blocks=1 00:14:08.900 00:14:08.900 ' 00:14:08.900 19:32:35 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:08.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.900 --rc genhtml_branch_coverage=1 00:14:08.900 --rc genhtml_function_coverage=1 00:14:08.900 --rc genhtml_legend=1 00:14:08.900 --rc geninfo_all_blocks=1 00:14:08.900 --rc geninfo_unexecuted_blocks=1 00:14:08.900 00:14:08.900 ' 00:14:08.900 19:32:35 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:08.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.900 --rc genhtml_branch_coverage=1 00:14:08.900 --rc genhtml_function_coverage=1 00:14:08.900 --rc genhtml_legend=1 00:14:08.900 --rc geninfo_all_blocks=1 00:14:08.900 --rc geninfo_unexecuted_blocks=1 00:14:08.900 00:14:08.900 ' 00:14:08.900 19:32:35 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:08.900 19:32:35 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:14:08.900 19:32:35 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.900 19:32:35 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.900 19:32:35 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.900 19:32:35 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.900 19:32:35 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.900 19:32:35 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.900 19:32:35 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:08.900 19:32:35 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.900 19:32:35 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:14:08.900 19:32:35 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:14:08.900 19:32:35 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:14:08.900 19:32:35 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:14:08.900 19:32:35 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:14:08.900 19:32:35 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:14:08.900 19:32:35 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:14:08.900 19:32:35 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:14:08.900 19:32:35 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:14:08.900 19:32:35 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:14:08.900 19:32:35 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:14:08.900 19:32:35 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:14:08.900 19:32:35 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:14:08.900 19:32:35 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:14:08.900 19:32:35 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:14:08.900 19:32:35 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:14:08.900 19:32:35 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:14:08.900 19:32:35 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:14:08.900 19:32:35 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:14:08.900 19:32:35 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:14:08.900 19:32:35 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:14:08.900 19:32:35 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:09.158 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:09.418 Waiting for block devices as requested 00:14:09.418 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:09.418 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:09.418 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:09.679 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:14.966 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:14.966 19:32:41 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:14:14.966 19:32:42 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:14:14.966 19:32:42 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:14:15.229 19:32:42 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:14:15.229 19:32:42 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:14:15.229 19:32:42 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:14:15.229 19:32:42 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:14:15.229 19:32:42 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:14:15.229 No valid GPT data, bailing 00:14:15.229 19:32:42 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:14:15.229 19:32:42 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:14:15.229 19:32:42 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:14:15.229 19:32:42 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:14:15.229 19:32:42 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:14:15.229 19:32:42 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:14:15.229 19:32:42 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:14:15.229 19:32:42 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:14:15.229 19:32:42 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:15.229 19:32:42 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:15.229 19:32:42 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:14:15.229 19:32:42 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:14:15.229 19:32:42 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:15.229 19:32:42 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:15.229 19:32:42 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:15.229 19:32:42 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:15.229 19:32:42 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:15.229 19:32:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:15.229 19:32:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.229 19:32:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:15.229 ************************************ 00:14:15.229 START TEST xnvme_rpc 00:14:15.229 ************************************ 00:14:15.229 19:32:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:15.229 19:32:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:15.229 19:32:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:15.229 19:32:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:15.229 19:32:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:15.229 19:32:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69400 00:14:15.229 19:32:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69400 00:14:15.229 19:32:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69400 ']' 00:14:15.229 19:32:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.229 19:32:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:15.229 19:32:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.229 19:32:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:15.229 19:32:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.229 19:32:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:15.491 [2024-12-05 19:32:42.518513] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
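The xnvme_rpc test starting here drives spdk_tgt over its JSON-RPC socket: create an xnvme bdev on /dev/nvme0n1, read the resulting config back, then delete the bdev. A minimal standalone sketch of that cycle, reconstructed from the rpc_cmd calls traced below (rpc_cmd in the harness wraps scripts/rpc.py against the default /var/tmp/spdk.sock; paths and the device node are specific to this run):

    cd /home/vagrant/spdk_repo/spdk
    ./build/bin/spdk_tgt &        # target process; pid was 69400 in this run
    # (the harness waits for the RPC socket via waitforlisten before issuing calls)
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio   # conserve_cpu off
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'  # -> libaio
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
    kill %1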
00:14:15.491 [2024-12-05 19:32:42.518635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69400 ] 00:14:15.491 [2024-12-05 19:32:42.674650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.752 [2024-12-05 19:32:42.775453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.322 xnvme_bdev 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69400 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69400 ']' 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69400 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69400 00:14:16.322 killing process with pid 69400 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69400' 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69400 00:14:16.322 19:32:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69400 00:14:18.230 00:14:18.230 real 0m2.632s 00:14:18.230 user 0m2.692s 00:14:18.230 sys 0m0.376s 00:14:18.230 19:32:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.230 19:32:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.230 ************************************ 00:14:18.230 END TEST xnvme_rpc 00:14:18.230 ************************************ 00:14:18.230 19:32:45 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:18.230 19:32:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:18.230 19:32:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.230 19:32:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:18.230 ************************************ 00:14:18.230 START TEST xnvme_bdevperf 00:14:18.230 ************************************ 00:14:18.230 19:32:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:18.230 19:32:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:18.230 19:32:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:14:18.230 19:32:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:18.230 19:32:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:18.230 19:32:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
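gen_conf prints the JSON bdev config shown just below; the harness hands it to bdevperf on an anonymous descriptor (/dev/fd/62). An equivalent self-contained invocation using the flags and config traced in this run — a sketch that writes the config to a temporary file instead of a pipe:

    cat > /tmp/xnvme_bdev.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "io_mechanism": "libaio",
                "conserve_cpu": false,
                "filename": "/dev/nvme0n1",
                "name": "xnvme_bdev"
              },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    JSON
    # 64 outstanding 4 KiB random reads for 5 s against the xnvme_bdev target:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096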
00:14:18.230 19:32:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:18.230 19:32:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:18.230 { 00:14:18.230 "subsystems": [ 00:14:18.230 { 00:14:18.230 "subsystem": "bdev", 00:14:18.230 "config": [ 00:14:18.230 { 00:14:18.230 "params": { 00:14:18.230 "io_mechanism": "libaio", 00:14:18.230 "conserve_cpu": false, 00:14:18.230 "filename": "/dev/nvme0n1", 00:14:18.230 "name": "xnvme_bdev" 00:14:18.230 }, 00:14:18.230 "method": "bdev_xnvme_create" 00:14:18.230 }, 00:14:18.230 { 00:14:18.230 "method": "bdev_wait_for_examine" 00:14:18.230 } 00:14:18.230 ] 00:14:18.230 } 00:14:18.230 ] 00:14:18.230 } 00:14:18.230 [2024-12-05 19:32:45.203290] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:14:18.230 [2024-12-05 19:32:45.203407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69474 ] 00:14:18.230 [2024-12-05 19:32:45.363887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.230 [2024-12-05 19:32:45.463396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.490 Running I/O for 5 seconds... 00:14:20.820 31740.00 IOPS, 123.98 MiB/s [2024-12-05T19:32:49.019Z] 32225.50 IOPS, 125.88 MiB/s [2024-12-05T19:32:49.963Z] 32687.00 IOPS, 127.68 MiB/s [2024-12-05T19:32:50.907Z] 32896.75 IOPS, 128.50 MiB/s 00:14:23.653 Latency(us) 00:14:23.653 [2024-12-05T19:32:50.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.653 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:23.653 xnvme_bdev : 5.00 32614.02 127.40 0.00 0.00 1957.63 198.50 7965.14 00:14:23.653 [2024-12-05T19:32:50.908Z] =================================================================================================================== 00:14:23.653 [2024-12-05T19:32:50.908Z] Total : 32614.02 127.40 0.00 0.00 1957.63 198.50 7965.14 00:14:24.223 19:32:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:24.223 19:32:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:24.223 19:32:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:24.223 19:32:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:24.223 19:32:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:24.485 { 00:14:24.485 "subsystems": [ 00:14:24.485 { 00:14:24.485 "subsystem": "bdev", 00:14:24.485 "config": [ 00:14:24.485 { 00:14:24.485 "params": { 00:14:24.485 "io_mechanism": "libaio", 00:14:24.485 "conserve_cpu": false, 00:14:24.485 "filename": "/dev/nvme0n1", 00:14:24.485 "name": "xnvme_bdev" 00:14:24.485 }, 00:14:24.485 "method": "bdev_xnvme_create" 00:14:24.485 }, 00:14:24.485 { 00:14:24.485 "method": "bdev_wait_for_examine" 00:14:24.485 } 00:14:24.485 ] 00:14:24.485 } 00:14:24.485 ] 00:14:24.485 } 00:14:24.485 [2024-12-05 19:32:51.546196] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
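A quick consistency check on the randread table above: with a fixed queue depth of 64 outstanding I/Os, Little's law predicts mean latency ≈ queue_depth / IOPS = 64 / 32614.02 ≈ 1962 usec, which agrees with the reported 1957.63 usec average to within measurement noise.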
00:14:24.485 [2024-12-05 19:32:51.546517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69549 ] 00:14:24.485 [2024-12-05 19:32:51.722810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.761 [2024-12-05 19:32:51.825545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.029 Running I/O for 5 seconds... 00:14:26.915 37562.00 IOPS, 146.73 MiB/s [2024-12-05T19:32:55.112Z] 37323.00 IOPS, 145.79 MiB/s [2024-12-05T19:32:56.496Z] 37102.33 IOPS, 144.93 MiB/s [2024-12-05T19:32:57.441Z] 36900.50 IOPS, 144.14 MiB/s 00:14:30.186 Latency(us) 00:14:30.186 [2024-12-05T19:32:57.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.186 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:30.186 xnvme_bdev : 5.00 37021.17 144.61 0.00 0.00 1724.13 172.50 7007.31 00:14:30.186 [2024-12-05T19:32:57.441Z] =================================================================================================================== 00:14:30.186 [2024-12-05T19:32:57.441Z] Total : 37021.17 144.61 0.00 0.00 1724.13 172.50 7007.31 00:14:30.758 ************************************ 00:14:30.758 END TEST xnvme_bdevperf 00:14:30.758 ************************************ 00:14:30.758 00:14:30.758 real 0m12.692s 00:14:30.758 user 0m4.788s 00:14:30.758 sys 0m6.112s 00:14:30.758 19:32:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.758 19:32:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:30.758 19:32:57 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:30.758 19:32:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:30.758 19:32:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:30.758 19:32:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:30.758 ************************************ 00:14:30.758 START TEST xnvme_fio_plugin 00:14:30.758 ************************************ 00:14:30.758 19:32:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:30.758 19:32:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:30.758 19:32:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:14:30.758 19:32:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:30.758 19:32:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:30.758 19:32:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:30.758 19:32:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:30.758 19:32:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:30.758 19:32:57 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:30.759 19:32:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:30.759 19:32:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:30.759 19:32:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:30.759 19:32:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:30.759 19:32:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:30.759 19:32:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:30.759 19:32:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:30.759 19:32:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:30.759 19:32:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:30.759 19:32:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:30.759 19:32:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:30.759 19:32:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:30.759 19:32:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:30.759 19:32:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:30.759 19:32:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:30.759 { 00:14:30.759 "subsystems": [ 00:14:30.759 { 00:14:30.759 "subsystem": "bdev", 00:14:30.759 "config": [ 00:14:30.759 { 00:14:30.759 "params": { 00:14:30.759 "io_mechanism": "libaio", 00:14:30.759 "conserve_cpu": false, 00:14:30.759 "filename": "/dev/nvme0n1", 00:14:30.759 "name": "xnvme_bdev" 00:14:30.759 }, 00:14:30.759 "method": "bdev_xnvme_create" 00:14:30.759 }, 00:14:30.759 { 00:14:30.759 "method": "bdev_wait_for_examine" 00:14:30.759 } 00:14:30.759 ] 00:14:30.759 } 00:14:30.759 ] 00:14:30.759 } 00:14:31.020 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:31.020 fio-3.35 00:14:31.020 Starting 1 thread 00:14:37.607 00:14:37.607 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69663: Thu Dec 5 19:33:03 2024 00:14:37.607 read: IOPS=35.4k, BW=138MiB/s (145MB/s)(691MiB/5002msec) 00:14:37.607 slat (usec): min=4, max=2107, avg=22.19, stdev=83.70 00:14:37.607 clat (usec): min=27, max=10905, avg=1256.84, stdev=612.52 00:14:37.607 lat (usec): min=97, max=10910, avg=1279.04, stdev=608.85 00:14:37.607 clat percentiles (usec): 00:14:37.607 | 1.00th=[ 231], 5.00th=[ 424], 10.00th=[ 562], 20.00th=[ 750], 00:14:37.607 | 30.00th=[ 906], 40.00th=[ 1037], 50.00th=[ 1172], 60.00th=[ 1336], 00:14:37.607 | 70.00th=[ 1500], 80.00th=[ 1713], 90.00th=[ 2024], 95.00th=[ 2311], 00:14:37.607 | 99.00th=[ 3064], 99.50th=[ 3425], 99.90th=[ 5080], 99.95th=[ 6128], 00:14:37.607 | 99.99th=[ 8029] 00:14:37.607 bw ( KiB/s): min=123984, max=155752, per=98.45%, avg=139260.44, stdev=12838.91, 
samples=9 00:14:37.607 iops : min=30996, max=38938, avg=34815.56, stdev=3209.24, samples=9 00:14:37.607 lat (usec) : 50=0.01%, 100=0.02%, 250=1.32%, 500=6.02%, 750=12.71% 00:14:37.607 lat (usec) : 1000=17.11% 00:14:37.607 lat (msec) : 2=52.40%, 4=10.24%, 10=0.18%, 20=0.01% 00:14:37.607 cpu : usr=34.01%, sys=56.03%, ctx=9, majf=0, minf=764 00:14:37.607 IO depths : 1=0.2%, 2=0.5%, 4=1.6%, 8=5.9%, 16=22.2%, 32=67.3%, >=64=2.4% 00:14:37.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.607 complete : 0=0.0%, 4=97.8%, 8=0.1%, 16=0.1%, 32=0.4%, 64=1.7%, >=64=0.0% 00:14:37.607 issued rwts: total=176881,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.607 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:37.607 00:14:37.607 Run status group 0 (all jobs): 00:14:37.607 READ: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=691MiB (725MB), run=5002-5002msec 00:14:37.607 ----------------------------------------------------- 00:14:37.607 Suppressions used: 00:14:37.607 count bytes template 00:14:37.607 1 11 /usr/src/fio/parse.c 00:14:37.607 1 8 libtcmalloc_minimal.so 00:14:37.607 1 904 libcrypto.so 00:14:37.607 ----------------------------------------------------- 00:14:37.607 00:14:37.607 19:33:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:37.607 19:33:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:37.607 19:33:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:37.607 19:33:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:37.608 19:33:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:37.608 19:33:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:37.608 19:33:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:37.608 19:33:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:37.608 19:33:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:37.608 19:33:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:37.608 19:33:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:37.608 19:33:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:37.608 19:33:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:37.608 19:33:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:37.608 19:33:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:37.608 19:33:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:37.608 19:33:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:37.608 19:33:04 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:37.608 19:33:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:37.608 19:33:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:37.608 19:33:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:37.608 { 00:14:37.608 "subsystems": [ 00:14:37.608 { 00:14:37.608 "subsystem": "bdev", 00:14:37.608 "config": [ 00:14:37.608 { 00:14:37.608 "params": { 00:14:37.608 "io_mechanism": "libaio", 00:14:37.608 "conserve_cpu": false, 00:14:37.608 "filename": "/dev/nvme0n1", 00:14:37.608 "name": "xnvme_bdev" 00:14:37.608 }, 00:14:37.608 "method": "bdev_xnvme_create" 00:14:37.608 }, 00:14:37.608 { 00:14:37.608 "method": "bdev_wait_for_examine" 00:14:37.608 } 00:14:37.608 ] 00:14:37.608 } 00:14:37.608 ] 00:14:37.608 } 00:14:37.866 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:37.866 fio-3.35 00:14:37.866 Starting 1 thread 00:14:44.451 00:14:44.451 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69749: Thu Dec 5 19:33:10 2024 00:14:44.451 write: IOPS=36.7k, BW=143MiB/s (150MB/s)(716MiB/5001msec); 0 zone resets 00:14:44.451 slat (usec): min=4, max=2029, avg=20.02, stdev=64.61 00:14:44.451 clat (usec): min=10, max=18825, avg=1205.33, stdev=1620.37 00:14:44.451 lat (usec): min=48, max=18838, avg=1225.34, stdev=1618.51 00:14:44.451 clat percentiles (usec): 00:14:44.451 | 1.00th=[ 196], 5.00th=[ 318], 10.00th=[ 429], 20.00th=[ 594], 00:14:44.451 | 30.00th=[ 734], 40.00th=[ 865], 50.00th=[ 979], 60.00th=[ 1090], 00:14:44.451 | 70.00th=[ 1237], 80.00th=[ 1418], 90.00th=[ 1696], 95.00th=[ 2024], 00:14:44.451 | 99.00th=[13042], 99.50th=[14746], 99.90th=[16319], 99.95th=[16712], 00:14:44.451 | 99.99th=[17695] 00:14:44.451 bw ( KiB/s): min=41856, max=164144, per=98.55%, avg=144515.56, stdev=39351.31, samples=9 00:14:44.451 iops : min=10464, max=41036, avg=36128.89, stdev=9837.83, samples=9 00:14:44.451 lat (usec) : 20=0.01%, 50=0.01%, 100=0.11%, 250=2.13%, 500=11.47% 00:14:44.451 lat (usec) : 750=17.55%, 1000=20.75% 00:14:44.451 lat (msec) : 2=42.71%, 4=3.80%, 10=0.08%, 20=1.38% 00:14:44.451 cpu : usr=39.36%, sys=49.04%, ctx=14, majf=0, minf=765 00:14:44.451 IO depths : 1=0.2%, 2=0.8%, 4=3.0%, 8=9.3%, 16=24.0%, 32=59.9%, >=64=2.7% 00:14:44.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.451 complete : 0=0.0%, 4=97.9%, 8=0.2%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0% 00:14:44.451 issued rwts: total=0,183340,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:44.451 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:44.451 00:14:44.451 Run status group 0 (all jobs): 00:14:44.451 WRITE: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=716MiB (751MB), run=5001-5001msec 00:14:44.451 ----------------------------------------------------- 00:14:44.451 Suppressions used: 00:14:44.451 count bytes template 00:14:44.451 1 11 /usr/src/fio/parse.c 00:14:44.451 1 8 libtcmalloc_minimal.so 00:14:44.451 1 904 libcrypto.so 00:14:44.451 ----------------------------------------------------- 00:14:44.451 00:14:44.451 00:14:44.451 real 0m13.555s 00:14:44.451 
user 0m6.321s 00:14:44.451 sys 0m5.746s 00:14:44.451 19:33:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.451 ************************************ 00:14:44.451 END TEST xnvme_fio_plugin 00:14:44.451 ************************************ 00:14:44.451 19:33:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:44.451 19:33:11 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:44.451 19:33:11 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:44.451 19:33:11 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:44.451 19:33:11 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:44.451 19:33:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:44.451 19:33:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.451 19:33:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:44.451 ************************************ 00:14:44.451 START TEST xnvme_rpc 00:14:44.451 ************************************ 00:14:44.451 19:33:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:44.451 19:33:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:44.451 19:33:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:44.451 19:33:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:44.451 19:33:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:44.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.451 19:33:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69839 00:14:44.451 19:33:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69839 00:14:44.451 19:33:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69839 ']' 00:14:44.451 19:33:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.451 19:33:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.451 19:33:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.451 19:33:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.451 19:33:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.451 19:33:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:44.451 [2024-12-05 19:33:11.596314] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
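The xnvme_rpc test now repeats with conserve_cpu enabled (spdk_tgt pid 69839 in this pass). The only difference on the wire is the -c flag on the create call; in the sketch given earlier it becomes:

    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c   # conserve_cpu on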
00:14:44.451 [2024-12-05 19:33:11.596558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69839 ] 00:14:44.713 [2024-12-05 19:33:11.756509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.713 [2024-12-05 19:33:11.853769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.287 xnvme_bdev 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.287 19:33:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:45.549 19:33:12 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69839 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69839 ']' 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69839 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69839 00:14:45.549 killing process with pid 69839 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69839' 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69839 00:14:45.549 19:33:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69839 00:14:46.937 ************************************ 00:14:46.937 END TEST xnvme_rpc 00:14:46.937 ************************************ 00:14:46.937 00:14:46.937 real 0m2.619s 00:14:46.937 user 0m2.686s 00:14:46.937 sys 0m0.369s 00:14:46.937 19:33:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:46.937 19:33:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.937 19:33:14 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:46.937 19:33:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:46.937 19:33:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:46.937 19:33:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.199 ************************************ 00:14:47.199 START TEST xnvme_bdevperf 00:14:47.199 ************************************ 00:14:47.199 19:33:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:47.199 19:33:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:47.199 19:33:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:14:47.199 19:33:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:47.199 19:33:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:47.199 19:33:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:14:47.199 19:33:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:47.199 19:33:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:47.199 { 00:14:47.199 "subsystems": [ 00:14:47.199 { 00:14:47.199 "subsystem": "bdev", 00:14:47.199 "config": [ 00:14:47.199 { 00:14:47.199 "params": { 00:14:47.199 "io_mechanism": "libaio", 00:14:47.199 "conserve_cpu": true, 00:14:47.199 "filename": "/dev/nvme0n1", 00:14:47.199 "name": "xnvme_bdev" 00:14:47.199 }, 00:14:47.199 "method": "bdev_xnvme_create" 00:14:47.199 }, 00:14:47.199 { 00:14:47.199 "method": "bdev_wait_for_examine" 00:14:47.199 } 00:14:47.199 ] 00:14:47.199 } 00:14:47.199 ] 00:14:47.199 } 00:14:47.199 [2024-12-05 19:33:14.268443] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:14:47.199 [2024-12-05 19:33:14.268658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69912 ] 00:14:47.199 [2024-12-05 19:33:14.428226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.460 [2024-12-05 19:33:14.524138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.722 Running I/O for 5 seconds... 00:14:49.679 37982.00 IOPS, 148.37 MiB/s [2024-12-05T19:33:17.878Z] 36724.00 IOPS, 143.45 MiB/s [2024-12-05T19:33:18.819Z] 35835.00 IOPS, 139.98 MiB/s [2024-12-05T19:33:20.199Z] 35627.00 IOPS, 139.17 MiB/s [2024-12-05T19:33:20.199Z] 35719.20 IOPS, 139.53 MiB/s 00:14:52.944 Latency(us) 00:14:52.944 [2024-12-05T19:33:20.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.944 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:52.944 xnvme_bdev : 5.01 35668.38 139.33 0.00 0.00 1788.45 118.94 11846.89 00:14:52.944 [2024-12-05T19:33:20.199Z] =================================================================================================================== 00:14:52.944 [2024-12-05T19:33:20.199Z] Total : 35668.38 139.33 0.00 0.00 1788.45 118.94 11846.89 00:14:53.515 19:33:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:53.515 19:33:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:53.515 19:33:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:53.515 19:33:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:53.515 19:33:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:53.515 { 00:14:53.515 "subsystems": [ 00:14:53.515 { 00:14:53.515 "subsystem": "bdev", 00:14:53.515 "config": [ 00:14:53.515 { 00:14:53.515 "params": { 00:14:53.515 "io_mechanism": "libaio", 00:14:53.515 "conserve_cpu": true, 00:14:53.515 "filename": "/dev/nvme0n1", 00:14:53.515 "name": "xnvme_bdev" 00:14:53.515 }, 00:14:53.515 "method": "bdev_xnvme_create" 00:14:53.515 }, 00:14:53.515 { 00:14:53.515 "method": "bdev_wait_for_examine" 00:14:53.515 } 00:14:53.515 ] 00:14:53.515 } 00:14:53.515 ] 00:14:53.515 } 00:14:53.515 [2024-12-05 19:33:20.615424] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
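gen_conf itself is a harness helper sourced from dd/common.sh; its implementation is not shown in this log. A hypothetical sketch of the idea — building the subsystems JSON above from the method_bdev_xnvme_create_0 associative array the trace declares (function name and jq usage are illustrative, not the harness code):

    declare -A method_bdev_xnvme_create_0=(
        [name]=xnvme_bdev [filename]=/dev/nvme0n1
        [io_mechanism]=libaio [conserve_cpu]=true
    )
    gen_conf_sketch() {    # hypothetical helper, not the real gen_conf
        local -n p=method_bdev_xnvme_create_0
        jq -n --arg name "${p[name]}" --arg filename "${p[filename]}" \
              --arg io "${p[io_mechanism]}" --argjson cc "${p[conserve_cpu]}" '
            {subsystems: [{subsystem: "bdev", config: [
                {params: {io_mechanism: $io, conserve_cpu: $cc,
                          filename: $filename, name: $name},
                 method: "bdev_xnvme_create"},
                {method: "bdev_wait_for_examine"}]}]}'
    }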
00:14:53.515 [2024-12-05 19:33:20.615540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69983 ] 00:14:53.775 [2024-12-05 19:33:20.775872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.775 [2024-12-05 19:33:20.873296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.035 Running I/O for 5 seconds... 00:14:55.919 3845.00 IOPS, 15.02 MiB/s [2024-12-05T19:33:24.557Z] 3980.00 IOPS, 15.55 MiB/s [2024-12-05T19:33:25.515Z] 4044.00 IOPS, 15.80 MiB/s [2024-12-05T19:33:26.456Z] 4149.00 IOPS, 16.21 MiB/s [2024-12-05T19:33:26.456Z] 4225.40 IOPS, 16.51 MiB/s 00:14:59.201 Latency(us) 00:14:59.201 [2024-12-05T19:33:26.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.201 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:59.201 xnvme_bdev : 5.01 4229.31 16.52 0.00 0.00 15112.76 42.14 57671.68 00:14:59.201 [2024-12-05T19:33:26.456Z] =================================================================================================================== 00:14:59.201 [2024-12-05T19:33:26.456Z] Total : 4229.31 16.52 0.00 0.00 15112.76 42.14 57671.68 00:14:59.883 00:14:59.883 real 0m12.692s 00:14:59.883 user 0m8.158s 00:14:59.883 sys 0m3.353s 00:14:59.883 19:33:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.883 19:33:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:59.883 ************************************ 00:14:59.883 END TEST xnvme_bdevperf 00:14:59.883 ************************************ 00:14:59.883 19:33:26 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:59.883 19:33:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:59.883 19:33:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.883 19:33:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.883 ************************************ 00:14:59.883 START TEST xnvme_fio_plugin 00:14:59.883 ************************************ 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:59.883 19:33:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.883 { 00:14:59.883 "subsystems": [ 00:14:59.883 { 00:14:59.883 "subsystem": "bdev", 00:14:59.883 "config": [ 00:14:59.883 { 00:14:59.883 "params": { 00:14:59.883 "io_mechanism": "libaio", 00:14:59.883 "conserve_cpu": true, 00:14:59.883 "filename": "/dev/nvme0n1", 00:14:59.883 "name": "xnvme_bdev" 00:14:59.883 }, 00:14:59.883 "method": "bdev_xnvme_create" 00:14:59.883 }, 00:14:59.883 { 00:14:59.883 "method": "bdev_wait_for_examine" 00:14:59.883 } 00:14:59.883 ] 00:14:59.883 } 00:14:59.883 ] 00:14:59.883 } 00:15:00.144 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:00.144 fio-3.35 00:15:00.144 Starting 1 thread 00:15:06.734 00:15:06.734 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70105: Thu Dec 5 19:33:32 2024 00:15:06.734 read: IOPS=38.8k, BW=152MiB/s (159MB/s)(758MiB/5002msec) 00:15:06.734 slat (usec): min=4, max=1416, avg=20.77, stdev=75.84 00:15:06.734 clat (usec): min=104, max=8243, avg=1085.86, stdev=512.82 00:15:06.734 lat (usec): min=161, max=8257, avg=1106.63, stdev=508.90 00:15:06.734 clat percentiles (usec): 00:15:06.734 | 1.00th=[ 215], 5.00th=[ 367], 10.00th=[ 510], 20.00th=[ 660], 00:15:06.734 | 30.00th=[ 799], 40.00th=[ 914], 50.00th=[ 1029], 60.00th=[ 1139], 00:15:06.734 | 70.00th=[ 1287], 80.00th=[ 1450], 90.00th=[ 1713], 95.00th=[ 1991], 00:15:06.734 | 99.00th=[ 2704], 99.50th=[ 2999], 99.90th=[ 3818], 99.95th=[ 4293], 00:15:06.734 | 99.99th=[ 6063] 00:15:06.734 bw ( KiB/s): min=146280, max=165264, 
per=100.00%, avg=156596.44, stdev=6223.67, samples=9 00:15:06.734 iops : min=36570, max=41316, avg=39149.56, stdev=1555.04, samples=9 00:15:06.734 lat (usec) : 250=1.89%, 500=7.49%, 750=16.94%, 1000=21.39% 00:15:06.734 lat (msec) : 2=47.48%, 4=4.74%, 10=0.07% 00:15:06.734 cpu : usr=34.37%, sys=57.07%, ctx=10, majf=0, minf=764 00:15:06.734 IO depths : 1=0.3%, 2=0.9%, 4=3.0%, 8=9.0%, 16=24.3%, 32=60.5%, >=64=2.0% 00:15:06.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.734 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:15:06.734 issued rwts: total=194127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.734 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:06.734 00:15:06.734 Run status group 0 (all jobs): 00:15:06.734 READ: bw=152MiB/s (159MB/s), 152MiB/s-152MiB/s (159MB/s-159MB/s), io=758MiB (795MB), run=5002-5002msec 00:15:06.734 ----------------------------------------------------- 00:15:06.734 Suppressions used: 00:15:06.734 count bytes template 00:15:06.734 1 11 /usr/src/fio/parse.c 00:15:06.734 1 8 libtcmalloc_minimal.so 00:15:06.734 1 904 libcrypto.so 00:15:06.734 ----------------------------------------------------- 00:15:06.734 00:15:06.734 19:33:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:06.735 19:33:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:06.735 19:33:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:06.735 19:33:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:06.735 19:33:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:06.735 19:33:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:06.735 19:33:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:06.735 19:33:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:06.735 19:33:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:06.735 19:33:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:06.735 19:33:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:06.735 19:33:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:06.735 19:33:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:06.735 19:33:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:06.735 19:33:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:06.735 19:33:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:06.735 19:33:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:06.735 19:33:33 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:06.735 19:33:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:06.735 19:33:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:06.735 19:33:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:06.735 { 00:15:06.735 "subsystems": [ 00:15:06.735 { 00:15:06.735 "subsystem": "bdev", 00:15:06.735 "config": [ 00:15:06.735 { 00:15:06.735 "params": { 00:15:06.735 "io_mechanism": "libaio", 00:15:06.735 "conserve_cpu": true, 00:15:06.735 "filename": "/dev/nvme0n1", 00:15:06.735 "name": "xnvme_bdev" 00:15:06.735 }, 00:15:06.735 "method": "bdev_xnvme_create" 00:15:06.735 }, 00:15:06.735 { 00:15:06.735 "method": "bdev_wait_for_examine" 00:15:06.735 } 00:15:06.735 ] 00:15:06.735 } 00:15:06.735 ] 00:15:06.735 } 00:15:06.735 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:06.735 fio-3.35 00:15:06.735 Starting 1 thread 00:15:13.324 00:15:13.324 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70197: Thu Dec 5 19:33:39 2024 00:15:13.324 write: IOPS=37.8k, BW=148MiB/s (155MB/s)(739MiB/5001msec); 0 zone resets 00:15:13.324 slat (usec): min=4, max=1821, avg=21.96, stdev=72.52 00:15:13.324 clat (usec): min=46, max=6537, avg=1082.59, stdev=541.24 00:15:13.324 lat (usec): min=151, max=6542, avg=1104.55, stdev=538.35 00:15:13.324 clat percentiles (usec): 00:15:13.324 | 1.00th=[ 210], 5.00th=[ 330], 10.00th=[ 465], 20.00th=[ 627], 00:15:13.324 | 30.00th=[ 766], 40.00th=[ 898], 50.00th=[ 1012], 60.00th=[ 1139], 00:15:13.324 | 70.00th=[ 1287], 80.00th=[ 1467], 90.00th=[ 1778], 95.00th=[ 2057], 00:15:13.324 | 99.00th=[ 2802], 99.50th=[ 3064], 99.90th=[ 3687], 99.95th=[ 4080], 00:15:13.324 | 99.99th=[ 5538] 00:15:13.324 bw ( KiB/s): min=139880, max=179832, per=100.00%, avg=151920.89, stdev=11765.14, samples=9 00:15:13.324 iops : min=34970, max=44958, avg=37980.22, stdev=2941.28, samples=9 00:15:13.324 lat (usec) : 50=0.01%, 100=0.01%, 250=2.19%, 500=9.71%, 750=16.81% 00:15:13.324 lat (usec) : 1000=20.34% 00:15:13.324 lat (msec) : 2=45.30%, 4=5.60%, 10=0.06% 00:15:13.324 cpu : usr=32.76%, sys=57.24%, ctx=14, majf=0, minf=765 00:15:13.324 IO depths : 1=0.2%, 2=0.9%, 4=3.3%, 8=9.7%, 16=25.0%, 32=59.0%, >=64=1.9% 00:15:13.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.324 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:15:13.324 issued rwts: total=0,189082,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:13.324 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:13.324 00:15:13.324 Run status group 0 (all jobs): 00:15:13.324 WRITE: bw=148MiB/s (155MB/s), 148MiB/s-148MiB/s (155MB/s-155MB/s), io=739MiB (774MB), run=5001-5001msec 00:15:13.324 ----------------------------------------------------- 00:15:13.324 Suppressions used: 00:15:13.324 count bytes template 00:15:13.324 1 11 /usr/src/fio/parse.c 00:15:13.324 1 8 libtcmalloc_minimal.so 00:15:13.324 1 904 libcrypto.so 00:15:13.324 ----------------------------------------------------- 00:15:13.324 00:15:13.324 ************************************ 00:15:13.324 END TEST 
xnvme_fio_plugin 00:15:13.324 ************************************ 00:15:13.324 00:15:13.324 real 0m13.538s 00:15:13.324 user 0m6.007s 00:15:13.324 sys 0m6.208s 00:15:13.324 19:33:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.324 19:33:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:13.324 19:33:40 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:13.324 19:33:40 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:15:13.324 19:33:40 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:15:13.324 19:33:40 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:15:13.324 19:33:40 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:13.324 19:33:40 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:13.324 19:33:40 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:13.324 19:33:40 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:13.324 19:33:40 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:13.324 19:33:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:13.324 19:33:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.324 19:33:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.324 ************************************ 00:15:13.324 START TEST xnvme_rpc 00:15:13.324 ************************************ 00:15:13.324 19:33:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:13.324 19:33:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:13.324 19:33:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:13.324 19:33:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:13.324 19:33:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:13.324 19:33:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70283 00:15:13.324 19:33:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70283 00:15:13.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.324 19:33:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70283 ']' 00:15:13.324 19:33:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.324 19:33:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.324 19:33:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.324 19:33:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.324 19:33:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:13.324 19:33:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.586 [2024-12-05 19:33:40.637685] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
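The xnvme_rpc test starting here boils down to a short RPC round-trip against spdk_tgt: create an xnvme bdev, read the bdev config back, check each parameter, delete the bdev. A minimal sketch of that flow, assuming spdk_tgt is already listening on /var/tmp/spdk.sock and that rpc_cmd forwards its arguments to SPDK's scripts/rpc.py the way the autotest wrapper does (device path and mechanism are the ones this run uses):

    # Create the bdev over io_uring; the trailing '' is the cc["false"] slot
    # from the trace, i.e. conserve_cpu is left at its default of false.
    rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring ''

    # Pull the config back and verify one parameter at a time with jq,
    # mirroring the name/filename/io_mechanism/conserve_cpu checks traced below.
    rpc_cmd framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'

    # Tear the bdev down again before killing the target.
    rpc_cmd bdev_xnvme_delete xnvme_bdev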
00:15:13.586 [2024-12-05 19:33:40.637954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70283 ] 00:15:13.586 [2024-12-05 19:33:40.797093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.847 [2024-12-05 19:33:40.895824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.418 xnvme_bdev 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70283 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70283 ']' 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70283 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70283 00:15:14.418 killing process with pid 70283 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70283' 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70283 00:15:14.418 19:33:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70283 00:15:16.323 ************************************ 00:15:16.323 END TEST xnvme_rpc 00:15:16.324 ************************************ 00:15:16.324 00:15:16.324 real 0m2.612s 00:15:16.324 user 0m2.690s 00:15:16.324 sys 0m0.375s 00:15:16.324 19:33:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:16.324 19:33:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.324 19:33:43 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:16.324 19:33:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:16.324 19:33:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.324 19:33:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:16.324 ************************************ 00:15:16.324 START TEST xnvme_bdevperf 00:15:16.324 ************************************ 00:15:16.324 19:33:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:16.324 19:33:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:16.324 19:33:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:15:16.324 19:33:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:16.324 19:33:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:16.324 19:33:43 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:16.324 19:33:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:16.324 19:33:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:16.324 { 00:15:16.324 "subsystems": [ 00:15:16.324 { 00:15:16.324 "subsystem": "bdev", 00:15:16.324 "config": [ 00:15:16.324 { 00:15:16.324 "params": { 00:15:16.324 "io_mechanism": "io_uring", 00:15:16.324 "conserve_cpu": false, 00:15:16.324 "filename": "/dev/nvme0n1", 00:15:16.324 "name": "xnvme_bdev" 00:15:16.324 }, 00:15:16.324 "method": "bdev_xnvme_create" 00:15:16.324 }, 00:15:16.324 { 00:15:16.324 "method": "bdev_wait_for_examine" 00:15:16.324 } 00:15:16.324 ] 00:15:16.324 } 00:15:16.324 ] 00:15:16.324 } 00:15:16.324 [2024-12-05 19:33:43.299178] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:15:16.324 [2024-12-05 19:33:43.299409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70346 ] 00:15:16.324 [2024-12-05 19:33:43.455945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.324 [2024-12-05 19:33:43.558129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.586 Running I/O for 5 seconds... 00:15:18.910 43704.00 IOPS, 170.72 MiB/s [2024-12-05T19:33:47.108Z] 41167.50 IOPS, 160.81 MiB/s [2024-12-05T19:33:48.052Z] 40696.67 IOPS, 158.97 MiB/s [2024-12-05T19:33:48.994Z] 40069.00 IOPS, 156.52 MiB/s [2024-12-05T19:33:48.994Z] 39461.60 IOPS, 154.15 MiB/s 00:15:21.739 Latency(us) 00:15:21.739 [2024-12-05T19:33:48.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.739 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:21.739 xnvme_bdev : 5.00 39435.36 154.04 0.00 0.00 1618.59 230.01 25609.45 00:15:21.739 [2024-12-05T19:33:48.994Z] =================================================================================================================== 00:15:21.739 [2024-12-05T19:33:48.994Z] Total : 39435.36 154.04 0.00 0.00 1618.59 230.01 25609.45 00:15:22.311 19:33:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:22.311 19:33:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:22.311 19:33:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:22.311 19:33:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:22.311 19:33:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:22.311 { 00:15:22.311 "subsystems": [ 00:15:22.311 { 00:15:22.311 "subsystem": "bdev", 00:15:22.311 "config": [ 00:15:22.311 { 00:15:22.311 "params": { 00:15:22.311 "io_mechanism": "io_uring", 00:15:22.311 "conserve_cpu": false, 00:15:22.311 "filename": "/dev/nvme0n1", 00:15:22.311 "name": "xnvme_bdev" 00:15:22.311 }, 00:15:22.311 "method": "bdev_xnvme_create" 00:15:22.311 }, 00:15:22.311 { 00:15:22.311 "method": "bdev_wait_for_examine" 00:15:22.311 } 00:15:22.311 ] 00:15:22.311 } 00:15:22.311 ] 00:15:22.311 } 00:15:22.571 [2024-12-05 19:33:49.594970] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
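The bdevperf passes on either side of this point feed their configuration over an anonymous file descriptor: gen_conf prints the JSON block shown above and the harness points --json at /dev/fd/62. A standalone reproduction with the config in a regular file instead — a sketch, with the flags copied verbatim from the invocation above and the build-tree path taken from this run's layout:

    # conf.json: the same subsystem config the log prints before each run.
    cat > conf.json <<'EOF'
    {"subsystems": [{"subsystem": "bdev", "config": [
      {"params": {"io_mechanism": "io_uring", "conserve_cpu": false,
                  "filename": "/dev/nvme0n1", "name": "xnvme_bdev"},
       "method": "bdev_xnvme_create"},
      {"method": "bdev_wait_for_examine"}]}]}
    EOF

    # 4 KiB random I/O at queue depth 64 for 5 seconds against xnvme_bdev;
    # swap -w randread for -w randwrite to match the second pass.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json conf.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096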
00:15:22.571 [2024-12-05 19:33:49.595085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70427 ] 00:15:22.571 [2024-12-05 19:33:49.755507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.831 [2024-12-05 19:33:49.850553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.161 Running I/O for 5 seconds... 00:15:25.066 6251.00 IOPS, 24.42 MiB/s [2024-12-05T19:33:53.267Z] 6689.50 IOPS, 26.13 MiB/s [2024-12-05T19:33:54.209Z] 7349.67 IOPS, 28.71 MiB/s [2024-12-05T19:33:55.150Z] 7578.50 IOPS, 29.60 MiB/s [2024-12-05T19:33:55.150Z] 7856.60 IOPS, 30.69 MiB/s 00:15:27.895 Latency(us) 00:15:27.895 [2024-12-05T19:33:55.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.895 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:27.895 xnvme_bdev : 5.01 7847.27 30.65 0.00 0.00 8138.23 55.93 36296.86 00:15:27.895 [2024-12-05T19:33:55.150Z] =================================================================================================================== 00:15:27.895 [2024-12-05T19:33:55.150Z] Total : 7847.27 30.65 0.00 0.00 8138.23 55.93 36296.86 00:15:28.831 00:15:28.831 real 0m12.582s 00:15:28.831 user 0m5.770s 00:15:28.831 sys 0m6.567s 00:15:28.831 ************************************ 00:15:28.831 END TEST xnvme_bdevperf 00:15:28.831 ************************************ 00:15:28.831 19:33:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.831 19:33:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:28.831 19:33:55 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:28.831 19:33:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:28.831 19:33:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.831 19:33:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:28.831 ************************************ 00:15:28.831 START TEST xnvme_fio_plugin 00:15:28.831 ************************************ 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:28.831 19:33:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:28.831 { 00:15:28.831 "subsystems": [ 00:15:28.831 { 00:15:28.831 "subsystem": "bdev", 00:15:28.831 "config": [ 00:15:28.831 { 00:15:28.831 "params": { 00:15:28.831 "io_mechanism": "io_uring", 00:15:28.831 "conserve_cpu": false, 00:15:28.831 "filename": "/dev/nvme0n1", 00:15:28.831 "name": "xnvme_bdev" 00:15:28.831 }, 00:15:28.831 "method": "bdev_xnvme_create" 00:15:28.831 }, 00:15:28.831 { 00:15:28.831 "method": "bdev_wait_for_examine" 00:15:28.831 } 00:15:28.831 ] 00:15:28.831 } 00:15:28.831 ] 00:15:28.831 } 00:15:28.831 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:28.831 fio-3.35 00:15:28.831 Starting 1 thread 00:15:35.445 00:15:35.445 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70535: Thu Dec 5 19:34:01 2024 00:15:35.445 read: IOPS=40.7k, BW=159MiB/s (167MB/s)(795MiB/5003msec) 00:15:35.445 slat (nsec): min=2843, max=63692, avg=3239.63, stdev=1536.58 00:15:35.445 clat (usec): min=315, max=8144, avg=1444.52, stdev=244.24 00:15:35.445 lat (usec): min=318, max=8147, avg=1447.76, stdev=244.39 00:15:35.445 clat percentiles (usec): 00:15:35.445 | 1.00th=[ 889], 5.00th=[ 1037], 10.00th=[ 1139], 20.00th=[ 1270], 00:15:35.445 | 30.00th=[ 1352], 40.00th=[ 1401], 50.00th=[ 1450], 60.00th=[ 1500], 00:15:35.445 | 70.00th=[ 1549], 80.00th=[ 1614], 90.00th=[ 1729], 95.00th=[ 1827], 00:15:35.445 | 99.00th=[ 2073], 99.50th=[ 2180], 99.90th=[ 2868], 99.95th=[ 3261], 00:15:35.445 | 99.99th=[ 4490] 00:15:35.445 bw ( KiB/s): min=156672, 
max=169984, per=100.00%, avg=163041.78, stdev=4530.01, samples=9 00:15:35.445 iops : min=39168, max=42496, avg=40760.44, stdev=1132.50, samples=9 00:15:35.445 lat (usec) : 500=0.01%, 750=0.05%, 1000=3.67% 00:15:35.445 lat (msec) : 2=94.75%, 4=1.50%, 10=0.02% 00:15:35.445 cpu : usr=34.39%, sys=64.57%, ctx=14, majf=0, minf=762 00:15:35.445 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.3%, 16=24.9%, 32=50.5%, >=64=1.6% 00:15:35.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.445 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:35.445 issued rwts: total=203561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.445 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:35.445 00:15:35.445 Run status group 0 (all jobs): 00:15:35.445 READ: bw=159MiB/s (167MB/s), 159MiB/s-159MiB/s (167MB/s-167MB/s), io=795MiB (834MB), run=5003-5003msec 00:15:35.445 ----------------------------------------------------- 00:15:35.445 Suppressions used: 00:15:35.445 count bytes template 00:15:35.445 1 11 /usr/src/fio/parse.c 00:15:35.445 1 8 libtcmalloc_minimal.so 00:15:35.445 1 904 libcrypto.so 00:15:35.445 ----------------------------------------------------- 00:15:35.445 00:15:35.445 19:34:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:35.445 19:34:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:35.445 19:34:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:35.445 19:34:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:35.445 19:34:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:35.445 19:34:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:35.445 19:34:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:35.445 19:34:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:35.445 19:34:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:35.445 19:34:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:35.445 19:34:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:35.445 19:34:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:35.445 19:34:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:35.445 19:34:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:35.445 19:34:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:35.445 19:34:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:35.445 19:34:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:35.445 19:34:02 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:35.445 19:34:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:35.445 19:34:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:35.445 19:34:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:35.445 { 00:15:35.445 "subsystems": [ 00:15:35.445 { 00:15:35.445 "subsystem": "bdev", 00:15:35.445 "config": [ 00:15:35.445 { 00:15:35.445 "params": { 00:15:35.445 "io_mechanism": "io_uring", 00:15:35.445 "conserve_cpu": false, 00:15:35.445 "filename": "/dev/nvme0n1", 00:15:35.445 "name": "xnvme_bdev" 00:15:35.445 }, 00:15:35.445 "method": "bdev_xnvme_create" 00:15:35.445 }, 00:15:35.445 { 00:15:35.445 "method": "bdev_wait_for_examine" 00:15:35.445 } 00:15:35.445 ] 00:15:35.445 } 00:15:35.445 ] 00:15:35.445 } 00:15:35.708 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:35.708 fio-3.35 00:15:35.708 Starting 1 thread 00:15:42.317 00:15:42.317 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70633: Thu Dec 5 19:34:08 2024 00:15:42.317 write: IOPS=40.8k, BW=159MiB/s (167MB/s)(799MiB/5014msec); 0 zone resets 00:15:42.317 slat (nsec): min=2897, max=67049, avg=3994.53, stdev=1972.25 00:15:42.317 clat (usec): min=76, max=28687, avg=1414.06, stdev=893.15 00:15:42.317 lat (usec): min=79, max=28717, avg=1418.05, stdev=893.26 00:15:42.317 clat percentiles (usec): 00:15:42.317 | 1.00th=[ 824], 5.00th=[ 963], 10.00th=[ 1037], 20.00th=[ 1123], 00:15:42.317 | 30.00th=[ 1188], 40.00th=[ 1270], 50.00th=[ 1336], 60.00th=[ 1418], 00:15:42.317 | 70.00th=[ 1500], 80.00th=[ 1582], 90.00th=[ 1745], 95.00th=[ 1876], 00:15:42.317 | 99.00th=[ 2245], 99.50th=[ 3851], 99.90th=[18482], 99.95th=[19530], 00:15:42.317 | 99.99th=[21365] 00:15:42.317 bw ( KiB/s): min=120184, max=186880, per=100.00%, avg=163515.20, stdev=17369.46, samples=10 00:15:42.317 iops : min=30046, max=46720, avg=40878.80, stdev=4342.36, samples=10 00:15:42.317 lat (usec) : 100=0.01%, 250=0.03%, 500=0.13%, 750=0.44%, 1000=6.53% 00:15:42.317 lat (msec) : 2=90.50%, 4=1.89%, 10=0.25%, 20=0.20%, 50=0.03% 00:15:42.317 cpu : usr=33.93%, sys=64.93%, ctx=7, majf=0, minf=763 00:15:42.317 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=12.2%, 16=24.6%, 32=50.9%, >=64=1.8% 00:15:42.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.317 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:42.317 issued rwts: total=0,204453,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.317 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:42.317 00:15:42.317 Run status group 0 (all jobs): 00:15:42.317 WRITE: bw=159MiB/s (167MB/s), 159MiB/s-159MiB/s (167MB/s-167MB/s), io=799MiB (837MB), run=5014-5014msec 00:15:42.317 ----------------------------------------------------- 00:15:42.317 Suppressions used: 00:15:42.317 count bytes template 00:15:42.317 1 11 /usr/src/fio/parse.c 00:15:42.317 1 8 libtcmalloc_minimal.so 00:15:42.317 1 904 libcrypto.so 00:15:42.318 ----------------------------------------------------- 00:15:42.318 00:15:42.318 00:15:42.318 real 0m13.560s 00:15:42.318 user 0m6.146s 
00:15:42.318 sys 0m6.983s 00:15:42.318 ************************************ 00:15:42.318 END TEST xnvme_fio_plugin 00:15:42.318 ************************************ 00:15:42.318 19:34:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.318 19:34:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:42.318 19:34:09 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:42.318 19:34:09 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:42.318 19:34:09 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:42.318 19:34:09 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:42.318 19:34:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:42.318 19:34:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.318 19:34:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:42.318 ************************************ 00:15:42.318 START TEST xnvme_rpc 00:15:42.318 ************************************ 00:15:42.318 19:34:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:42.318 19:34:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:42.318 19:34:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:42.318 19:34:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:42.318 19:34:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:42.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.318 19:34:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70714 00:15:42.318 19:34:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70714 00:15:42.318 19:34:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70714 ']' 00:15:42.318 19:34:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.318 19:34:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:42.318 19:34:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:42.318 19:34:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.318 19:34:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:42.318 19:34:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:42.579 [2024-12-05 19:34:09.605057] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
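Every fio plugin pass above repeats the same sanitizer-preload dance before launching fio: the harness runs ldd over the plugin, and if the plugin links against libasan, that runtime has to come first in LD_PRELOAD, since ASan typically refuses to initialize when a non-instrumented fio dlopens an instrumented engine. A condensed sketch of the traced logic (the libasan path and .so.8 version are specific to this builder; the real harness pipes the JSON config on fd 62 rather than using a file):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # Resolve the libasan the plugin was linked against, if any.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # Preload ASan ahead of the plugin, then run fio with the plugin
    # as an external spdk_bdev ioengine, exactly as traced above.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=conf.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 \
        --numjobs=1 --rw=randread --time_based --runtime=5 \
        --thread=1 --name xnvme_bdev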
00:15:42.579 [2024-12-05 19:34:09.605433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70714 ] 00:15:42.579 [2024-12-05 19:34:09.769127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.841 [2024-12-05 19:34:09.905464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.412 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:43.412 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:43.412 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:15:43.412 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.412 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.412 xnvme_bdev 00:15:43.412 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.412 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:43.412 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:43.412 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:43.412 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.412 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.412 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.412 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:43.412 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:43.412 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:43.412 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.412 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.412 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70714 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70714 ']' 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70714 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70714 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:43.671 killing process with pid 70714 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70714' 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70714 00:15:43.671 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70714 00:15:45.601 00:15:45.601 real 0m2.950s 00:15:45.601 user 0m2.937s 00:15:45.601 sys 0m0.498s 00:15:45.601 19:34:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.601 19:34:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.601 ************************************ 00:15:45.601 END TEST xnvme_rpc 00:15:45.601 ************************************ 00:15:45.601 19:34:12 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:45.601 19:34:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:45.601 19:34:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.601 19:34:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:45.601 ************************************ 00:15:45.601 START TEST xnvme_bdevperf 00:15:45.601 ************************************ 00:15:45.601 19:34:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:45.601 19:34:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:45.601 19:34:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:15:45.601 19:34:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:45.601 19:34:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:45.601 19:34:12 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:45.601 19:34:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:45.601 19:34:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:45.601 { 00:15:45.601 "subsystems": [ 00:15:45.601 { 00:15:45.601 "subsystem": "bdev", 00:15:45.601 "config": [ 00:15:45.601 { 00:15:45.601 "params": { 00:15:45.601 "io_mechanism": "io_uring", 00:15:45.601 "conserve_cpu": true, 00:15:45.601 "filename": "/dev/nvme0n1", 00:15:45.601 "name": "xnvme_bdev" 00:15:45.601 }, 00:15:45.601 "method": "bdev_xnvme_create" 00:15:45.601 }, 00:15:45.601 { 00:15:45.601 "method": "bdev_wait_for_examine" 00:15:45.601 } 00:15:45.601 ] 00:15:45.601 } 00:15:45.601 ] 00:15:45.601 } 00:15:45.601 [2024-12-05 19:34:12.611585] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:15:45.601 [2024-12-05 19:34:12.611960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70788 ] 00:15:45.601 [2024-12-05 19:34:12.779167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.863 [2024-12-05 19:34:12.907364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.124 Running I/O for 5 seconds... 00:15:48.021 32801.00 IOPS, 128.13 MiB/s [2024-12-05T19:34:16.220Z] 32608.00 IOPS, 127.38 MiB/s [2024-12-05T19:34:17.605Z] 32323.33 IOPS, 126.26 MiB/s [2024-12-05T19:34:18.550Z] 32271.00 IOPS, 126.06 MiB/s 00:15:51.295 Latency(us) 00:15:51.295 [2024-12-05T19:34:18.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.295 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:51.295 xnvme_bdev : 5.00 32131.13 125.51 0.00 0.00 1986.92 409.60 26214.40 00:15:51.295 [2024-12-05T19:34:18.550Z] =================================================================================================================== 00:15:51.295 [2024-12-05T19:34:18.550Z] Total : 32131.13 125.51 0.00 0.00 1986.92 409.60 26214.40 00:15:51.864 19:34:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:51.864 19:34:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:51.864 19:34:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:51.864 19:34:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:51.864 19:34:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:51.864 { 00:15:51.864 "subsystems": [ 00:15:51.864 { 00:15:51.864 "subsystem": "bdev", 00:15:51.864 "config": [ 00:15:51.864 { 00:15:51.864 "params": { 00:15:51.864 "io_mechanism": "io_uring", 00:15:51.864 "conserve_cpu": true, 00:15:51.864 "filename": "/dev/nvme0n1", 00:15:51.864 "name": "xnvme_bdev" 00:15:51.864 }, 00:15:51.864 "method": "bdev_xnvme_create" 00:15:51.864 }, 00:15:51.864 { 00:15:51.864 "method": "bdev_wait_for_examine" 00:15:51.864 } 00:15:51.864 ] 00:15:51.864 } 00:15:51.864 ] 00:15:51.864 } 00:15:51.864 [2024-12-05 19:34:19.064713] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
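Stepping back, the xnvme.sh@75-88 trace lines show the shape of the whole suite: an outer loop over io mechanisms, an inner loop over the conserve_cpu flag, and the same three tests re-run for every combination. A reconstruction from those trace lines — the array contents beyond what this log exercises (libaio, io_uring, io_uring_cmd) are inferred, not confirmed:

    for io in "${xnvme_io[@]}"; do                     # xnvme.sh@75
      method_bdev_xnvme_create_0["io_mechanism"]=$io   # @76
      method_bdev_xnvme_create_0["filename"]=$filename # @77: /dev/nvme0n1 or /dev/ng0n1
      for cc in "${xnvme_conserve_cpu[@]}"; do         # @82: false, then true
        method_bdev_xnvme_create_0["conserve_cpu"]=$cc # @83
        run_test xnvme_rpc xnvme_rpc                   # @86
        run_test xnvme_bdevperf xnvme_bdevperf         # @87
        run_test xnvme_fio_plugin xnvme_fio_plugin     # @88
      done
    done

The only knob that changes between the two io_uring groups of results is conserve_cpu, so their IOPS and latency figures are directly comparable.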
00:15:51.864 [2024-12-05 19:34:19.065141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70864 ] 00:15:52.125 [2024-12-05 19:34:19.229717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.125 [2024-12-05 19:34:19.355706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.697 Running I/O for 5 seconds... 00:15:54.586 10470.00 IOPS, 40.90 MiB/s [2024-12-05T19:34:22.785Z] 10415.50 IOPS, 40.69 MiB/s [2024-12-05T19:34:23.732Z] 10372.33 IOPS, 40.52 MiB/s [2024-12-05T19:34:24.690Z] 10308.50 IOPS, 40.27 MiB/s [2024-12-05T19:34:24.958Z] 10319.60 IOPS, 40.31 MiB/s 00:15:57.703 Latency(us) 00:15:57.703 [2024-12-05T19:34:24.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.703 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:57.703 xnvme_bdev : 5.01 10311.12 40.28 0.00 0.00 6196.76 74.04 27021.00 00:15:57.703 [2024-12-05T19:34:24.958Z] =================================================================================================================== 00:15:57.703 [2024-12-05T19:34:24.958Z] Total : 10311.12 40.28 0.00 0.00 6196.76 74.04 27021.00 00:15:58.275 00:15:58.275 real 0m12.947s 00:15:58.275 user 0m9.166s 00:15:58.275 sys 0m2.818s 00:15:58.275 19:34:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.275 ************************************ 00:15:58.275 END TEST xnvme_bdevperf 00:15:58.275 ************************************ 00:15:58.275 19:34:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:58.535 19:34:25 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:58.535 19:34:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:58.535 19:34:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.535 19:34:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:58.535 ************************************ 00:15:58.535 START TEST xnvme_fio_plugin 00:15:58.535 ************************************ 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:58.535 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:58.535 { 00:15:58.535 "subsystems": [ 00:15:58.535 { 00:15:58.535 "subsystem": "bdev", 00:15:58.535 "config": [ 00:15:58.535 { 00:15:58.535 "params": { 00:15:58.535 "io_mechanism": "io_uring", 00:15:58.535 "conserve_cpu": true, 00:15:58.535 "filename": "/dev/nvme0n1", 00:15:58.535 "name": "xnvme_bdev" 00:15:58.535 }, 00:15:58.535 "method": "bdev_xnvme_create" 00:15:58.535 }, 00:15:58.535 { 00:15:58.535 "method": "bdev_wait_for_examine" 00:15:58.535 } 00:15:58.535 ] 00:15:58.535 } 00:15:58.536 ] 00:15:58.536 } 00:15:58.536 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:58.536 fio-3.35 00:15:58.536 Starting 1 thread 00:16:05.174 00:16:05.174 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70978: Thu Dec 5 19:34:31 2024 00:16:05.174 read: IOPS=33.7k, BW=131MiB/s (138MB/s)(658MiB/5002msec) 00:16:05.174 slat (nsec): min=2848, max=93765, avg=3997.19, stdev=2250.18 00:16:05.174 clat (usec): min=1014, max=3278, avg=1738.69, stdev=292.61 00:16:05.174 lat (usec): min=1017, max=3308, avg=1742.69, stdev=293.07 00:16:05.174 clat percentiles (usec): 00:16:05.174 | 1.00th=[ 1221], 5.00th=[ 1336], 10.00th=[ 1401], 20.00th=[ 1500], 00:16:05.174 | 30.00th=[ 1565], 40.00th=[ 1631], 50.00th=[ 1696], 60.00th=[ 1778], 00:16:05.174 | 70.00th=[ 1860], 80.00th=[ 1958], 90.00th=[ 2147], 95.00th=[ 2278], 00:16:05.174 | 99.00th=[ 2573], 99.50th=[ 2704], 99.90th=[ 2999], 99.95th=[ 3097], 00:16:05.174 | 99.99th=[ 3195] 00:16:05.174 bw ( KiB/s): min=129024, 
max=139497, per=99.51%, avg=133942.33, stdev=3366.73, samples=9 00:16:05.174 iops : min=32256, max=34874, avg=33485.56, stdev=841.63, samples=9 00:16:05.174 lat (msec) : 2=82.63%, 4=17.37% 00:16:05.174 cpu : usr=46.69%, sys=48.93%, ctx=10, majf=0, minf=762 00:16:05.174 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:05.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.174 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:16:05.174 issued rwts: total=168320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.174 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:05.174 00:16:05.174 Run status group 0 (all jobs): 00:16:05.174 READ: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=658MiB (689MB), run=5002-5002msec 00:16:05.174 ----------------------------------------------------- 00:16:05.174 Suppressions used: 00:16:05.174 count bytes template 00:16:05.174 1 11 /usr/src/fio/parse.c 00:16:05.174 1 8 libtcmalloc_minimal.so 00:16:05.174 1 904 libcrypto.so 00:16:05.174 ----------------------------------------------------- 00:16:05.174 00:16:05.453 19:34:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:05.453 19:34:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:05.453 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:05.453 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:05.453 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:05.453 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:05.453 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:05.453 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:05.453 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:05.453 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:05.453 19:34:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:05.453 19:34:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:05.453 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:05.453 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:05.453 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:05.453 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:05.453 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:05.453 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:16:05.453 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:05.453 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:05.453 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:05.453 { 00:16:05.453 "subsystems": [ 00:16:05.453 { 00:16:05.453 "subsystem": "bdev", 00:16:05.453 "config": [ 00:16:05.453 { 00:16:05.453 "params": { 00:16:05.453 "io_mechanism": "io_uring", 00:16:05.453 "conserve_cpu": true, 00:16:05.453 "filename": "/dev/nvme0n1", 00:16:05.453 "name": "xnvme_bdev" 00:16:05.453 }, 00:16:05.453 "method": "bdev_xnvme_create" 00:16:05.453 }, 00:16:05.453 { 00:16:05.453 "method": "bdev_wait_for_examine" 00:16:05.453 } 00:16:05.453 ] 00:16:05.453 } 00:16:05.453 ] 00:16:05.453 } 00:16:05.453 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:05.453 fio-3.35 00:16:05.453 Starting 1 thread 00:16:12.104 00:16:12.104 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71074: Thu Dec 5 19:34:38 2024 00:16:12.104 write: IOPS=41.9k, BW=164MiB/s (172MB/s)(821MiB/5013msec); 0 zone resets 00:16:12.104 slat (usec): min=2, max=121, avg= 3.93, stdev= 1.98 00:16:12.104 clat (usec): min=55, max=29324, avg=1374.77, stdev=1560.58 00:16:12.104 lat (usec): min=59, max=29328, avg=1378.70, stdev=1560.77 00:16:12.104 clat percentiles (usec): 00:16:12.104 | 1.00th=[ 388], 5.00th=[ 701], 10.00th=[ 742], 20.00th=[ 816], 00:16:12.104 | 30.00th=[ 889], 40.00th=[ 1012], 50.00th=[ 1188], 60.00th=[ 1369], 00:16:12.104 | 70.00th=[ 1500], 80.00th=[ 1631], 90.00th=[ 1811], 95.00th=[ 1991], 00:16:12.104 | 99.00th=[ 9896], 99.50th=[15795], 99.90th=[19268], 99.95th=[20317], 00:16:12.104 | 99.99th=[24249] 00:16:12.104 bw ( KiB/s): min=43064, max=254976, per=100.00%, avg=168172.80, stdev=64211.02, samples=10 00:16:12.104 iops : min=10766, max=63744, avg=42043.20, stdev=16052.76, samples=10 00:16:12.104 lat (usec) : 100=0.03%, 250=0.56%, 500=0.64%, 750=9.97%, 1000=28.13% 00:16:12.104 lat (msec) : 2=55.88%, 4=3.76%, 10=0.03%, 20=0.93%, 50=0.06% 00:16:12.104 cpu : usr=47.96%, sys=47.47%, ctx=9, majf=0, minf=763 00:16:12.104 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.2%, 16=24.3%, 32=50.5%, >=64=2.4% 00:16:12.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.104 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:16:12.104 issued rwts: total=0,210268,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.104 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:12.104 00:16:12.104 Run status group 0 (all jobs): 00:16:12.104 WRITE: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=821MiB (861MB), run=5013-5013msec 00:16:12.104 ----------------------------------------------------- 00:16:12.104 Suppressions used: 00:16:12.104 count bytes template 00:16:12.104 1 11 /usr/src/fio/parse.c 00:16:12.104 1 8 libtcmalloc_minimal.so 00:16:12.104 1 904 libcrypto.so 00:16:12.104 ----------------------------------------------------- 00:16:12.104 00:16:12.104 00:16:12.104 real 0m13.711s 00:16:12.104 user 0m7.604s 00:16:12.104 sys 0m5.339s 00:16:12.104 19:34:39 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.104 19:34:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:12.104 ************************************ 00:16:12.104 END TEST xnvme_fio_plugin 00:16:12.104 ************************************ 00:16:12.104 19:34:39 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:16:12.104 19:34:39 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:16:12.104 19:34:39 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:16:12.104 19:34:39 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:16:12.104 19:34:39 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:16:12.104 19:34:39 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:12.105 19:34:39 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:16:12.105 19:34:39 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:16:12.105 19:34:39 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:12.105 19:34:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:12.105 19:34:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.105 19:34:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:12.105 ************************************ 00:16:12.105 START TEST xnvme_rpc 00:16:12.105 ************************************ 00:16:12.105 19:34:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:12.105 19:34:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:12.105 19:34:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:12.105 19:34:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:12.105 19:34:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:12.105 19:34:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71156 00:16:12.105 19:34:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71156 00:16:12.105 19:34:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71156 ']' 00:16:12.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.105 19:34:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.105 19:34:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.105 19:34:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.105 19:34:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:12.105 19:34:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.105 19:34:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:12.366 [2024-12-05 19:34:39.425178] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
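[Editor's note] The xnvme_rpc test launching here boots a standalone spdk_tgt and drives it purely over the RPC socket; the trace below shows the harness wrappers (rpc_cmd, waitforlisten, rpc_xnvme). Stripped of those wrappers, the round-trip amounts to the following hand-written sketch — the polling loop is an illustration rather than the harness's waitforlisten, and paths assume the build tree visible in the log:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" &                    # target listens on /var/tmp/spdk.sock
tgt_pid=$!
# wait until the RPC socket answers (stand-in for the harness's waitforlisten)
until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

# register the NVMe char device as an xnvme bdev (filename, bdev name,
# io mechanism -- argument order as in the trace), then read the config back
"$SPDK/scripts/rpc.py" bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
"$SPDK/scripts/rpc.py" framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'   # -> /dev/ng0n1
"$SPDK/scripts/rpc.py" bdev_xnvme_delete xnvme_bdev
kill "$tgt_pid"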
00:16:12.366 [2024-12-05 19:34:39.425330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71156 ] 00:16:12.366 [2024-12-05 19:34:39.590298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.626 [2024-12-05 19:34:39.718888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.196 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:13.196 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:13.196 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:16:13.196 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.196 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.196 xnvme_bdev 00:16:13.196 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.196 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:13.196 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:13.196 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.196 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.196 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:13.196 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.454 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:13.454 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:13.454 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71156 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71156 ']' 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71156 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71156 00:16:13.455 killing process with pid 71156 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71156' 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71156 00:16:13.455 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71156 00:16:15.367 ************************************ 00:16:15.367 END TEST xnvme_rpc 00:16:15.367 ************************************ 00:16:15.367 00:16:15.367 real 0m2.888s 00:16:15.367 user 0m2.909s 00:16:15.367 sys 0m0.474s 00:16:15.367 19:34:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.367 19:34:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.367 19:34:42 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:15.367 19:34:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:15.367 19:34:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.367 19:34:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:15.367 ************************************ 00:16:15.367 START TEST xnvme_bdevperf 00:16:15.367 ************************************ 00:16:15.367 19:34:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:15.367 19:34:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:15.367 19:34:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:16:15.367 19:34:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:15.367 19:34:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:15.367 19:34:42 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:15.367 19:34:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:15.367 19:34:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:15.367 { 00:16:15.367 "subsystems": [ 00:16:15.367 { 00:16:15.367 "subsystem": "bdev", 00:16:15.367 "config": [ 00:16:15.367 { 00:16:15.367 "params": { 00:16:15.367 "io_mechanism": "io_uring_cmd", 00:16:15.367 "conserve_cpu": false, 00:16:15.367 "filename": "/dev/ng0n1", 00:16:15.367 "name": "xnvme_bdev" 00:16:15.367 }, 00:16:15.367 "method": "bdev_xnvme_create" 00:16:15.367 }, 00:16:15.367 { 00:16:15.367 "method": "bdev_wait_for_examine" 00:16:15.367 } 00:16:15.367 ] 00:16:15.367 } 00:16:15.367 ] 00:16:15.367 } 00:16:15.367 [2024-12-05 19:34:42.354414] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:16:15.367 [2024-12-05 19:34:42.354738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71230 ] 00:16:15.367 [2024-12-05 19:34:42.517010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.628 [2024-12-05 19:34:42.639960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.888 Running I/O for 5 seconds... 00:16:17.762 34325.00 IOPS, 134.08 MiB/s [2024-12-05T19:34:45.975Z] 45703.50 IOPS, 178.53 MiB/s [2024-12-05T19:34:47.361Z] 43891.00 IOPS, 171.45 MiB/s [2024-12-05T19:34:47.934Z] 41797.25 IOPS, 163.27 MiB/s 00:16:20.679 Latency(us) 00:16:20.679 [2024-12-05T19:34:47.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.679 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:20.679 xnvme_bdev : 5.00 40505.08 158.22 0.00 0.00 1575.61 326.10 12300.60 00:16:20.679 [2024-12-05T19:34:47.934Z] =================================================================================================================== 00:16:20.679 [2024-12-05T19:34:47.934Z] Total : 40505.08 158.22 0.00 0.00 1575.61 326.10 12300.60 00:16:21.647 19:34:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:21.647 19:34:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:21.647 19:34:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:21.647 19:34:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:21.647 19:34:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:21.647 { 00:16:21.647 "subsystems": [ 00:16:21.647 { 00:16:21.647 "subsystem": "bdev", 00:16:21.647 "config": [ 00:16:21.647 { 00:16:21.647 "params": { 00:16:21.647 "io_mechanism": "io_uring_cmd", 00:16:21.647 "conserve_cpu": false, 00:16:21.647 "filename": "/dev/ng0n1", 00:16:21.647 "name": "xnvme_bdev" 00:16:21.647 }, 00:16:21.647 "method": "bdev_xnvme_create" 00:16:21.647 }, 00:16:21.647 { 00:16:21.647 "method": "bdev_wait_for_examine" 00:16:21.647 } 00:16:21.647 ] 00:16:21.647 } 00:16:21.647 ] 00:16:21.647 } 00:16:21.647 [2024-12-05 19:34:48.788091] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
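[Editor's note] The bdevperf runs above and below receive their bdev table through /dev/fd/62 instead of a config file on disk. A minimal standalone reproduction of the randread case follows, assuming the same build layout: the heredoc carries the exact JSON the trace prints, and attaching it to descriptor 62 is this sketch's plumbing — the harness generates it with its gen_conf helper.

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 \
    62<<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
    { "method": "bdev_xnvme_create",
      "params": { "io_mechanism": "io_uring_cmd", "conserve_cpu": false,
                  "filename": "/dev/ng0n1", "name": "xnvme_bdev" } },
    { "method": "bdev_wait_for_examine" } ] } ] }
EOF

Here -q is the queue depth, -o the I/O size in bytes, -w the workload, -t the runtime in seconds, and -T restricts the run to the bdev named xnvme_bdev, matching the name in the create call.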
00:16:21.647 [2024-12-05 19:34:48.788236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71305 ] 00:16:21.907 [2024-12-05 19:34:48.950566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.907 [2024-12-05 19:34:49.078271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.167 Running I/O for 5 seconds... 00:16:24.120 37400.00 IOPS, 146.09 MiB/s [2024-12-05T19:34:52.754Z] 34090.00 IOPS, 133.16 MiB/s [2024-12-05T19:34:53.695Z] 34520.00 IOPS, 134.84 MiB/s [2024-12-05T19:34:54.636Z] 34672.25 IOPS, 135.44 MiB/s 00:16:27.381 Latency(us) 00:16:27.381 [2024-12-05T19:34:54.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.381 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:27.381 xnvme_bdev : 5.00 32554.22 127.16 0.00 0.00 1960.95 55.53 18955.03 00:16:27.381 [2024-12-05T19:34:54.636Z] =================================================================================================================== 00:16:27.381 [2024-12-05T19:34:54.636Z] Total : 32554.22 127.16 0.00 0.00 1960.95 55.53 18955.03 00:16:27.952 19:34:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:27.952 19:34:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:16:27.952 19:34:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:27.952 19:34:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:27.952 19:34:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:27.952 { 00:16:27.952 "subsystems": [ 00:16:27.952 { 00:16:27.952 "subsystem": "bdev", 00:16:27.952 "config": [ 00:16:27.952 { 00:16:27.952 "params": { 00:16:27.952 "io_mechanism": "io_uring_cmd", 00:16:27.952 "conserve_cpu": false, 00:16:27.952 "filename": "/dev/ng0n1", 00:16:27.952 "name": "xnvme_bdev" 00:16:27.952 }, 00:16:27.953 "method": "bdev_xnvme_create" 00:16:27.953 }, 00:16:27.953 { 00:16:27.953 "method": "bdev_wait_for_examine" 00:16:27.953 } 00:16:27.953 ] 00:16:27.953 } 00:16:27.953 ] 00:16:27.953 } 00:16:28.212 [2024-12-05 19:34:55.229248] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:16:28.212 [2024-12-05 19:34:55.229561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71384 ] 00:16:28.212 [2024-12-05 19:34:55.395498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.473 [2024-12-05 19:34:55.515008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.734 Running I/O for 5 seconds... 
00:16:30.616 78400.00 IOPS, 306.25 MiB/s [2024-12-05T19:34:58.816Z] 78592.00 IOPS, 307.00 MiB/s [2024-12-05T19:35:00.200Z] 78741.33 IOPS, 307.58 MiB/s [2024-12-05T19:35:01.142Z] 78848.00 IOPS, 308.00 MiB/s [2024-12-05T19:35:01.142Z] 78771.20 IOPS, 307.70 MiB/s 00:16:33.887 Latency(us) 00:16:33.887 [2024-12-05T19:35:01.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.887 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:16:33.887 xnvme_bdev : 5.00 78740.85 307.58 0.00 0.00 809.44 526.18 2734.87 00:16:33.887 [2024-12-05T19:35:01.142Z] =================================================================================================================== 00:16:33.887 [2024-12-05T19:35:01.142Z] Total : 78740.85 307.58 0.00 0.00 809.44 526.18 2734.87 00:16:34.457 19:35:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:34.457 19:35:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:16:34.457 19:35:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:34.457 19:35:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:34.457 19:35:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:34.457 { 00:16:34.457 "subsystems": [ 00:16:34.457 { 00:16:34.457 "subsystem": "bdev", 00:16:34.457 "config": [ 00:16:34.457 { 00:16:34.457 "params": { 00:16:34.457 "io_mechanism": "io_uring_cmd", 00:16:34.457 "conserve_cpu": false, 00:16:34.457 "filename": "/dev/ng0n1", 00:16:34.457 "name": "xnvme_bdev" 00:16:34.457 }, 00:16:34.457 "method": "bdev_xnvme_create" 00:16:34.457 }, 00:16:34.457 { 00:16:34.457 "method": "bdev_wait_for_examine" 00:16:34.457 } 00:16:34.457 ] 00:16:34.457 } 00:16:34.457 ] 00:16:34.457 } 00:16:34.457 [2024-12-05 19:35:01.685542] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:16:34.457 [2024-12-05 19:35:01.685703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71454 ] 00:16:34.716 [2024-12-05 19:35:01.848784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.716 [2024-12-05 19:35:01.969268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.285 Running I/O for 5 seconds... 
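[Editor's note] The write_zeroes pass launching above is the last of four: randread, randwrite, unmap, and write_zeroes are one loop in xnvme.sh over io_pattern_ref with every other parameter held constant. Collapsed to its essentials, the loop looks like the sketch below; gen_conf stands in for the harness's JSON generator seen in the trace:

SPDK=/home/vagrant/spdk_repo/spdk
for io_pattern in randread randwrite unmap write_zeroes; do
    # same bdev config each time, only the workload changes
    "$SPDK/build/examples/bdevperf" --json <(gen_conf) \
        -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
done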
00:16:37.161 30311.00 IOPS, 118.40 MiB/s [2024-12-05T19:35:05.356Z] 24136.50 IOPS, 94.28 MiB/s [2024-12-05T19:35:06.295Z] 20893.67 IOPS, 81.62 MiB/s [2024-12-05T19:35:07.699Z] 19075.25 IOPS, 74.51 MiB/s [2024-12-05T19:35:07.699Z] 18558.00 IOPS, 72.49 MiB/s 00:16:40.444 Latency(us) 00:16:40.444 [2024-12-05T19:35:07.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.444 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:16:40.444 xnvme_bdev : 5.01 18533.55 72.40 0.00 0.00 3446.54 68.53 359742.23 00:16:40.444 [2024-12-05T19:35:07.699Z] =================================================================================================================== 00:16:40.444 [2024-12-05T19:35:07.699Z] Total : 18533.55 72.40 0.00 0.00 3446.54 68.53 359742.23 00:16:41.015 00:16:41.015 real 0m25.781s 00:16:41.015 user 0m14.147s 00:16:41.015 sys 0m11.114s 00:16:41.015 ************************************ 00:16:41.015 END TEST xnvme_bdevperf 00:16:41.015 ************************************ 00:16:41.015 19:35:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.015 19:35:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:41.015 19:35:08 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:41.015 19:35:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:41.015 19:35:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.015 19:35:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:41.015 ************************************ 00:16:41.015 START TEST xnvme_fio_plugin 00:16:41.015 ************************************ 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local 
asan_lib= 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:41.015 19:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:41.015 { 00:16:41.015 "subsystems": [ 00:16:41.015 { 00:16:41.015 "subsystem": "bdev", 00:16:41.015 "config": [ 00:16:41.015 { 00:16:41.015 "params": { 00:16:41.015 "io_mechanism": "io_uring_cmd", 00:16:41.015 "conserve_cpu": false, 00:16:41.015 "filename": "/dev/ng0n1", 00:16:41.015 "name": "xnvme_bdev" 00:16:41.015 }, 00:16:41.015 "method": "bdev_xnvme_create" 00:16:41.015 }, 00:16:41.015 { 00:16:41.015 "method": "bdev_wait_for_examine" 00:16:41.015 } 00:16:41.015 ] 00:16:41.015 } 00:16:41.015 ] 00:16:41.015 } 00:16:41.276 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:41.276 fio-3.35 00:16:41.276 Starting 1 thread 00:16:47.867 00:16:47.867 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71572: Thu Dec 5 19:35:14 2024 00:16:47.867 read: IOPS=33.9k, BW=132MiB/s (139MB/s)(662MiB/5001msec) 00:16:47.867 slat (usec): min=2, max=126, avg= 3.82, stdev= 2.23 00:16:47.867 clat (usec): min=887, max=3569, avg=1733.37, stdev=325.98 00:16:47.867 lat (usec): min=890, max=3601, avg=1737.19, stdev=326.37 00:16:47.867 clat percentiles (usec): 00:16:47.867 | 1.00th=[ 1139], 5.00th=[ 1270], 10.00th=[ 1352], 20.00th=[ 1450], 00:16:47.867 | 30.00th=[ 1532], 40.00th=[ 1614], 50.00th=[ 1696], 60.00th=[ 1778], 00:16:47.867 | 70.00th=[ 1876], 80.00th=[ 1991], 90.00th=[ 2147], 95.00th=[ 2311], 00:16:47.867 | 99.00th=[ 2671], 99.50th=[ 2802], 99.90th=[ 3195], 99.95th=[ 3294], 00:16:47.867 | 99.99th=[ 3425] 00:16:47.867 bw ( KiB/s): min=133632, max=138240, per=100.00%, avg=136248.89, stdev=1725.76, samples=9 00:16:47.867 iops : min=33408, max=34560, avg=34062.22, stdev=431.44, samples=9 00:16:47.867 lat (usec) : 1000=0.09% 00:16:47.867 lat (msec) : 2=80.73%, 4=19.18% 00:16:47.867 cpu : usr=35.62%, sys=63.06%, ctx=13, majf=0, minf=762 00:16:47.867 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:47.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.867 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:16:47.867 issued rwts: total=169472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.867 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:47.867 00:16:47.867 Run status group 0 (all jobs): 00:16:47.867 READ: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=662MiB (694MB), run=5001-5001msec 00:16:47.867 ----------------------------------------------------- 00:16:47.867 Suppressions used: 00:16:47.867 count bytes template 00:16:47.867 1 11 /usr/src/fio/parse.c 00:16:47.867 1 8 libtcmalloc_minimal.so 00:16:47.867 1 904 libcrypto.so 00:16:47.867 ----------------------------------------------------- 00:16:47.867 00:16:47.867 19:35:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:47.867 19:35:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:47.867 19:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:47.867 19:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:47.867 19:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:47.867 19:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:47.867 19:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:47.867 19:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:47.867 19:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:47.867 19:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:47.867 19:35:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:47.867 19:35:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:47.867 19:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:47.867 19:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:47.867 19:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:47.867 19:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:47.867 19:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:47.867 19:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:47.867 19:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:47.867 19:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:47.867 19:35:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:47.867 { 00:16:47.867 "subsystems": [ 00:16:47.867 { 00:16:47.867 "subsystem": "bdev", 00:16:47.867 "config": [ 00:16:47.867 { 00:16:47.867 "params": { 00:16:47.867 "io_mechanism": "io_uring_cmd", 00:16:47.867 "conserve_cpu": false, 00:16:47.867 "filename": "/dev/ng0n1", 00:16:47.867 "name": "xnvme_bdev" 00:16:47.867 }, 00:16:47.867 "method": "bdev_xnvme_create" 00:16:47.867 }, 00:16:47.867 { 00:16:47.867 "method": "bdev_wait_for_examine" 00:16:47.867 } 00:16:47.867 ] 00:16:47.867 } 00:16:47.867 ] 00:16:47.867 } 00:16:48.128 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:48.128 fio-3.35 00:16:48.128 Starting 1 thread 00:16:54.713 00:16:54.713 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71663: Thu Dec 5 19:35:20 2024 00:16:54.713 write: IOPS=31.7k, BW=124MiB/s (130MB/s)(619MiB/5001msec); 0 zone resets 00:16:54.713 slat (nsec): min=2901, max=78204, avg=4161.30, stdev=2614.44 00:16:54.713 clat (usec): min=83, max=25055, avg=1853.39, stdev=1682.47 00:16:54.714 lat (usec): min=86, max=25059, avg=1857.56, stdev=1682.54 00:16:54.714 clat percentiles (usec): 00:16:54.714 | 1.00th=[ 734], 5.00th=[ 1254], 10.00th=[ 1319], 20.00th=[ 1418], 00:16:54.714 | 30.00th=[ 1500], 40.00th=[ 1582], 50.00th=[ 1647], 60.00th=[ 1729], 00:16:54.714 | 70.00th=[ 1811], 80.00th=[ 1926], 90.00th=[ 2114], 95.00th=[ 2311], 00:16:54.714 | 99.00th=[13698], 99.50th=[17171], 99.90th=[20579], 99.95th=[21365], 00:16:54.714 | 99.99th=[23200] 00:16:54.714 bw ( KiB/s): min=82048, max=143216, per=98.98%, avg=125494.22, stdev=22572.27, samples=9 00:16:54.714 iops : min=20512, max=35804, avg=31373.56, stdev=5643.07, samples=9 00:16:54.714 lat (usec) : 100=0.01%, 250=0.18%, 500=0.48%, 750=0.36%, 1000=0.28% 00:16:54.714 lat (msec) : 2=83.71%, 4=13.70%, 10=0.10%, 20=1.04%, 50=0.13% 00:16:54.714 cpu : usr=35.64%, sys=62.98%, ctx=9, majf=0, minf=763 00:16:54.714 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=12.0%, 16=24.3%, 32=51.0%, >=64=2.2% 00:16:54.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.714 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:16:54.714 issued rwts: total=0,158514,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.714 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:54.714 00:16:54.714 Run status group 0 (all jobs): 00:16:54.714 WRITE: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=619MiB (649MB), run=5001-5001msec 00:16:54.714 ----------------------------------------------------- 00:16:54.714 Suppressions used: 00:16:54.714 count bytes template 00:16:54.714 1 11 /usr/src/fio/parse.c 00:16:54.714 1 8 libtcmalloc_minimal.so 00:16:54.714 1 904 libcrypto.so 00:16:54.714 ----------------------------------------------------- 00:16:54.714 00:16:54.974 ************************************ 00:16:54.974 END TEST xnvme_fio_plugin 00:16:54.974 ************************************ 00:16:54.974 00:16:54.974 real 0m13.826s 00:16:54.974 user 0m6.502s 00:16:54.974 sys 0m6.857s 00:16:54.974 19:35:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:54.974 19:35:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:54.974 19:35:22 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:54.974 19:35:22 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:16:54.974 19:35:22 
nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:16:54.974 19:35:22 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:54.974 19:35:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:54.974 19:35:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:54.974 19:35:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:54.974 ************************************ 00:16:54.974 START TEST xnvme_rpc 00:16:54.974 ************************************ 00:16:54.974 19:35:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:54.974 19:35:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:54.974 19:35:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:54.974 19:35:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:54.974 19:35:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:54.974 19:35:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71748 00:16:54.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.974 19:35:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71748 00:16:54.974 19:35:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71748 ']' 00:16:54.974 19:35:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.974 19:35:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.974 19:35:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:54.974 19:35:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.974 19:35:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.974 19:35:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.974 [2024-12-05 19:35:22.128001] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
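[Editor's note] This second xnvme_rpc pass repeats the create/inspect/delete round-trip with CPU conservation enabled. The cc map at xnvme.sh@50 turns the boolean into an extra -c flag on the create call, and the jq probe reads the setting back; in isolation (SPDK pointing at the same build tree as before):

SPDK=/home/vagrant/spdk_repo/spdk
declare -A cc=( ["false"]="" ["true"]="-c" )
# left unquoted on purpose: the empty "false" entry expands to nothing
"$SPDK/scripts/rpc.py" bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd ${cc[true]}
"$SPDK/scripts/rpc.py" framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # -> true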
00:16:54.974 [2024-12-05 19:35:22.128152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71748 ] 00:16:55.234 [2024-12-05 19:35:22.290522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.235 [2024-12-05 19:35:22.411249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.179 xnvme_bdev 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71748 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71748 ']' 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71748 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71748 00:16:56.179 killing process with pid 71748 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71748' 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71748 00:16:56.179 19:35:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71748 00:16:58.094 ************************************ 00:16:58.094 END TEST xnvme_rpc 00:16:58.094 ************************************ 00:16:58.094 00:16:58.094 real 0m2.914s 00:16:58.094 user 0m2.905s 00:16:58.094 sys 0m0.472s 00:16:58.094 19:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:58.094 19:35:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.094 19:35:25 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:58.094 19:35:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:58.094 19:35:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:58.094 19:35:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:58.094 ************************************ 00:16:58.094 START TEST xnvme_bdevperf 00:16:58.094 ************************************ 00:16:58.094 19:35:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:58.094 19:35:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:58.094 19:35:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:16:58.094 19:35:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:58.094 19:35:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:58.094 19:35:25 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:58.094 19:35:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:58.094 19:35:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:58.094 { 00:16:58.094 "subsystems": [ 00:16:58.094 { 00:16:58.094 "subsystem": "bdev", 00:16:58.094 "config": [ 00:16:58.094 { 00:16:58.094 "params": { 00:16:58.094 "io_mechanism": "io_uring_cmd", 00:16:58.094 "conserve_cpu": true, 00:16:58.094 "filename": "/dev/ng0n1", 00:16:58.094 "name": "xnvme_bdev" 00:16:58.094 }, 00:16:58.094 "method": "bdev_xnvme_create" 00:16:58.094 }, 00:16:58.094 { 00:16:58.094 "method": "bdev_wait_for_examine" 00:16:58.094 } 00:16:58.094 ] 00:16:58.094 } 00:16:58.094 ] 00:16:58.094 } 00:16:58.094 [2024-12-05 19:35:25.095885] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:16:58.094 [2024-12-05 19:35:25.096029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71821 ] 00:16:58.094 [2024-12-05 19:35:25.260365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.355 [2024-12-05 19:35:25.382923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.616 Running I/O for 5 seconds... 00:17:00.507 35182.00 IOPS, 137.43 MiB/s [2024-12-05T19:35:28.705Z] 34894.50 IOPS, 136.31 MiB/s [2024-12-05T19:35:30.168Z] 35044.33 IOPS, 136.89 MiB/s [2024-12-05T19:35:30.736Z] 34849.25 IOPS, 136.13 MiB/s 00:17:03.481 Latency(us) 00:17:03.481 [2024-12-05T19:35:30.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.481 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:03.481 xnvme_bdev : 5.00 34496.76 134.75 0.00 0.00 1850.82 768.79 12703.90 00:17:03.481 [2024-12-05T19:35:30.736Z] =================================================================================================================== 00:17:03.481 [2024-12-05T19:35:30.736Z] Total : 34496.76 134.75 0.00 0.00 1850.82 768.79 12703.90 00:17:04.422 19:35:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:04.422 19:35:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:04.422 19:35:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:04.422 19:35:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:04.422 19:35:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:04.422 { 00:17:04.422 "subsystems": [ 00:17:04.422 { 00:17:04.422 "subsystem": "bdev", 00:17:04.422 "config": [ 00:17:04.422 { 00:17:04.422 "params": { 00:17:04.422 "io_mechanism": "io_uring_cmd", 00:17:04.422 "conserve_cpu": true, 00:17:04.422 "filename": "/dev/ng0n1", 00:17:04.422 "name": "xnvme_bdev" 00:17:04.422 }, 00:17:04.422 "method": "bdev_xnvme_create" 00:17:04.422 }, 00:17:04.422 { 00:17:04.422 "method": "bdev_wait_for_examine" 00:17:04.422 } 00:17:04.422 ] 00:17:04.422 } 00:17:04.422 ] 00:17:04.422 } 00:17:04.422 [2024-12-05 19:35:31.541590] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:17:04.422 [2024-12-05 19:35:31.541752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71891 ] 00:17:04.683 [2024-12-05 19:35:31.706807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.683 [2024-12-05 19:35:31.830579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.942 Running I/O for 5 seconds... 00:17:07.269 29570.00 IOPS, 115.51 MiB/s [2024-12-05T19:35:35.464Z] 31837.00 IOPS, 124.36 MiB/s [2024-12-05T19:35:36.406Z] 31147.33 IOPS, 121.67 MiB/s [2024-12-05T19:35:37.351Z] 30831.75 IOPS, 120.44 MiB/s [2024-12-05T19:35:37.351Z] 30527.20 IOPS, 119.25 MiB/s 00:17:10.096 Latency(us) 00:17:10.096 [2024-12-05T19:35:37.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.096 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:10.096 xnvme_bdev : 5.00 30526.64 119.24 0.00 0.00 2091.64 71.29 22282.24 00:17:10.096 [2024-12-05T19:35:37.351Z] =================================================================================================================== 00:17:10.096 [2024-12-05T19:35:37.351Z] Total : 30526.64 119.24 0.00 0.00 2091.64 71.29 22282.24 00:17:10.674 19:35:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:10.674 19:35:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:17:10.674 19:35:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:10.674 19:35:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:10.674 19:35:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:10.933 { 00:17:10.933 "subsystems": [ 00:17:10.933 { 00:17:10.933 "subsystem": "bdev", 00:17:10.933 "config": [ 00:17:10.933 { 00:17:10.933 "params": { 00:17:10.933 "io_mechanism": "io_uring_cmd", 00:17:10.933 "conserve_cpu": true, 00:17:10.934 "filename": "/dev/ng0n1", 00:17:10.934 "name": "xnvme_bdev" 00:17:10.934 }, 00:17:10.934 "method": "bdev_xnvme_create" 00:17:10.934 }, 00:17:10.934 { 00:17:10.934 "method": "bdev_wait_for_examine" 00:17:10.934 } 00:17:10.934 ] 00:17:10.934 } 00:17:10.934 ] 00:17:10.934 } 00:17:10.934 [2024-12-05 19:35:37.995904] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:17:10.934 [2024-12-05 19:35:37.996050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71966 ] 00:17:10.934 [2024-12-05 19:35:38.155285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.194 [2024-12-05 19:35:38.275723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.455 Running I/O for 5 seconds... 
00:17:13.340 78848.00 IOPS, 308.00 MiB/s [2024-12-05T19:35:41.984Z] 79040.00 IOPS, 308.75 MiB/s [2024-12-05T19:35:42.924Z] 78848.00 IOPS, 308.00 MiB/s [2024-12-05T19:35:43.864Z] 79024.00 IOPS, 308.69 MiB/s 00:17:16.609 Latency(us) 00:17:16.609 [2024-12-05T19:35:43.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.609 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:17:16.609 xnvme_bdev : 5.00 81228.86 317.30 0.00 0.00 784.47 370.22 3037.34 00:17:16.609 [2024-12-05T19:35:43.864Z] =================================================================================================================== 00:17:16.609 [2024-12-05T19:35:43.864Z] Total : 81228.86 317.30 0.00 0.00 784.47 370.22 3037.34 00:17:17.180 19:35:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:17.180 19:35:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:17:17.180 19:35:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:17.180 19:35:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:17.180 19:35:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:17.180 { 00:17:17.180 "subsystems": [ 00:17:17.180 { 00:17:17.180 "subsystem": "bdev", 00:17:17.180 "config": [ 00:17:17.180 { 00:17:17.180 "params": { 00:17:17.180 "io_mechanism": "io_uring_cmd", 00:17:17.180 "conserve_cpu": true, 00:17:17.180 "filename": "/dev/ng0n1", 00:17:17.180 "name": "xnvme_bdev" 00:17:17.180 }, 00:17:17.180 "method": "bdev_xnvme_create" 00:17:17.180 }, 00:17:17.180 { 00:17:17.180 "method": "bdev_wait_for_examine" 00:17:17.180 } 00:17:17.180 ] 00:17:17.180 } 00:17:17.180 ] 00:17:17.180 } 00:17:17.180 [2024-12-05 19:35:44.346512] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:17:17.180 [2024-12-05 19:35:44.346625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72046 ] 00:17:17.441 [2024-12-05 19:35:44.509858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.442 [2024-12-05 19:35:44.607882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.699 Running I/O for 5 seconds... 
00:17:20.020 44584.00 IOPS, 174.16 MiB/s [2024-12-05T19:35:48.214Z] 48735.00 IOPS, 190.37 MiB/s [2024-12-05T19:35:49.155Z] 38519.00 IOPS, 150.46 MiB/s [2024-12-05T19:35:50.121Z] 33552.50 IOPS, 131.06 MiB/s [2024-12-05T19:35:50.121Z] 32604.80 IOPS, 127.36 MiB/s 00:17:22.866 Latency(us) 00:17:22.866 [2024-12-05T19:35:50.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.866 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:17:22.866 xnvme_bdev : 5.01 32588.60 127.30 0.00 0.00 1958.41 61.44 217781.17 00:17:22.866 [2024-12-05T19:35:50.121Z] =================================================================================================================== 00:17:22.866 [2024-12-05T19:35:50.121Z] Total : 32588.60 127.30 0.00 0.00 1958.41 61.44 217781.17 00:17:23.439 00:17:23.439 real 0m25.622s 00:17:23.439 user 0m17.822s 00:17:23.439 sys 0m6.067s 00:17:23.439 19:35:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.439 ************************************ 00:17:23.439 END TEST xnvme_bdevperf 00:17:23.439 19:35:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:23.439 ************************************ 00:17:23.699 19:35:50 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:23.699 19:35:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:23.699 19:35:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.699 19:35:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:23.699 ************************************ 00:17:23.699 START TEST xnvme_fio_plugin 00:17:23.699 ************************************ 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
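[Editor's note] Every xnvme_fio_plugin run, including the two that follow, repeats the same sanitizer dance before launching fio: the external fio binary dlopen()s the SPDK ioengine, so on an ASAN-instrumented build the sanitizer runtime must be preloaded ahead of the plugin. Reassembled from the trace below, with the detection reduced to the libasan case and the /dev/fd/62 heredoc plumbing added by this sketch so it runs standalone:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # -> /usr/lib64/libasan.so.8 here
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev \
    62<<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
    { "method": "bdev_xnvme_create",
      "params": { "io_mechanism": "io_uring_cmd", "conserve_cpu": true,
                  "filename": "/dev/ng0n1", "name": "xnvme_bdev" } },
    { "method": "bdev_wait_for_examine" } ] } ] }
EOF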
00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:23.699 19:35:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:23.699 { 00:17:23.699 "subsystems": [ 00:17:23.699 { 00:17:23.699 "subsystem": "bdev", 00:17:23.699 "config": [ 00:17:23.699 { 00:17:23.700 "params": { 00:17:23.700 "io_mechanism": "io_uring_cmd", 00:17:23.700 "conserve_cpu": true, 00:17:23.700 "filename": "/dev/ng0n1", 00:17:23.700 "name": "xnvme_bdev" 00:17:23.700 }, 00:17:23.700 "method": "bdev_xnvme_create" 00:17:23.700 }, 00:17:23.700 { 00:17:23.700 "method": "bdev_wait_for_examine" 00:17:23.700 } 00:17:23.700 ] 00:17:23.700 } 00:17:23.700 ] 00:17:23.700 } 00:17:23.700 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:23.700 fio-3.35 00:17:23.700 Starting 1 thread 00:17:30.292 00:17:30.292 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72161: Thu Dec 5 19:35:56 2024 00:17:30.292 read: IOPS=34.7k, BW=135MiB/s (142MB/s)(678MiB/5001msec) 00:17:30.292 slat (nsec): min=2877, max=89518, avg=3888.91, stdev=2334.68 00:17:30.292 clat (usec): min=554, max=3511, avg=1686.71, stdev=323.96 00:17:30.292 lat (usec): min=561, max=3517, avg=1690.60, stdev=324.47 00:17:30.292 clat percentiles (usec): 00:17:30.292 | 1.00th=[ 1074], 5.00th=[ 1221], 10.00th=[ 1303], 20.00th=[ 1418], 00:17:30.292 | 30.00th=[ 1500], 40.00th=[ 1582], 50.00th=[ 1647], 60.00th=[ 1729], 00:17:30.292 | 70.00th=[ 1827], 80.00th=[ 1942], 90.00th=[ 2114], 95.00th=[ 2278], 00:17:30.292 | 99.00th=[ 2638], 99.50th=[ 2769], 99.90th=[ 3261], 99.95th=[ 3359], 00:17:30.292 | 99.99th=[ 3490] 00:17:30.292 bw ( KiB/s): min=131584, max=147456, per=100.00%, avg=139290.67, stdev=5630.93, samples=9 00:17:30.292 iops : min=32896, max=36864, avg=34822.67, stdev=1407.73, samples=9 00:17:30.292 lat (usec) : 750=0.01%, 1000=0.33% 00:17:30.292 lat (msec) : 2=83.81%, 4=15.85% 00:17:30.292 cpu : usr=57.84%, sys=38.94%, ctx=7, majf=0, minf=762 00:17:30.292 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:30.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.292 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 
64=1.5%, >=64=0.0% 00:17:30.292 issued rwts: total=173459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:30.292 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:30.292 00:17:30.292 Run status group 0 (all jobs): 00:17:30.292 READ: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=678MiB (710MB), run=5001-5001msec 00:17:30.554 ----------------------------------------------------- 00:17:30.554 Suppressions used: 00:17:30.554 count bytes template 00:17:30.554 1 11 /usr/src/fio/parse.c 00:17:30.554 1 8 libtcmalloc_minimal.so 00:17:30.554 1 904 libcrypto.so 00:17:30.554 ----------------------------------------------------- 00:17:30.554 00:17:30.554 19:35:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:30.554 19:35:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:30.554 19:35:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:30.554 19:35:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:30.554 19:35:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:30.554 19:35:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:30.554 19:35:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:30.554 19:35:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:30.554 19:35:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:30.554 19:35:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:30.554 19:35:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:30.554 19:35:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:30.554 19:35:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:30.554 19:35:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:30.554 19:35:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:30.554 19:35:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:30.554 19:35:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:30.554 19:35:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:30.554 19:35:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:30.554 19:35:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:30.554 19:35:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:30.554 { 00:17:30.554 "subsystems": [ 00:17:30.554 { 00:17:30.554 "subsystem": "bdev", 00:17:30.554 "config": [ 00:17:30.554 { 00:17:30.554 "params": { 00:17:30.554 "io_mechanism": "io_uring_cmd", 00:17:30.554 "conserve_cpu": true, 00:17:30.554 "filename": "/dev/ng0n1", 00:17:30.554 "name": "xnvme_bdev" 00:17:30.554 }, 00:17:30.554 "method": "bdev_xnvme_create" 00:17:30.554 }, 00:17:30.554 { 00:17:30.554 "method": "bdev_wait_for_examine" 00:17:30.554 } 00:17:30.554 ] 00:17:30.554 } 00:17:30.554 ] 00:17:30.554 } 00:17:30.815 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:30.815 fio-3.35 00:17:30.815 Starting 1 thread 00:17:37.492 00:17:37.492 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72250: Thu Dec 5 19:36:03 2024 00:17:37.492 write: IOPS=29.8k, BW=116MiB/s (122MB/s)(582MiB/5001msec); 0 zone resets 00:17:37.492 slat (usec): min=2, max=128, avg= 4.07, stdev= 2.52 00:17:37.492 clat (usec): min=66, max=30877, avg=1985.59, stdev=2591.17 00:17:37.492 lat (usec): min=70, max=30881, avg=1989.67, stdev=2591.31 00:17:37.492 clat percentiles (usec): 00:17:37.492 | 1.00th=[ 717], 5.00th=[ 1156], 10.00th=[ 1254], 20.00th=[ 1385], 00:17:37.492 | 30.00th=[ 1467], 40.00th=[ 1549], 50.00th=[ 1614], 60.00th=[ 1696], 00:17:37.492 | 70.00th=[ 1778], 80.00th=[ 1893], 90.00th=[ 2114], 95.00th=[ 2311], 00:17:37.492 | 99.00th=[19530], 99.50th=[22152], 99.90th=[26608], 99.95th=[27919], 00:17:37.492 | 99.99th=[30016] 00:17:37.492 bw ( KiB/s): min=84792, max=144824, per=100.00%, avg=123632.89, stdev=22755.20, samples=9 00:17:37.492 iops : min=21198, max=36206, avg=30908.22, stdev=5688.80, samples=9 00:17:37.492 lat (usec) : 100=0.01%, 250=0.19%, 500=0.44%, 750=0.42%, 1000=0.50% 00:17:37.492 lat (msec) : 2=84.58%, 4=11.84%, 10=0.10%, 20=1.05%, 50=0.88% 00:17:37.492 cpu : usr=64.94%, sys=30.94%, ctx=15, majf=0, minf=763 00:17:37.492 IO depths : 1=1.5%, 2=2.9%, 4=5.9%, 8=12.0%, 16=24.1%, 32=50.8%, >=64=2.6% 00:17:37.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.492 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:17:37.492 issued rwts: total=0,149006,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.492 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:37.492 00:17:37.492 Run status group 0 (all jobs): 00:17:37.492 WRITE: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=582MiB (610MB), run=5001-5001msec 00:17:37.492 ----------------------------------------------------- 00:17:37.492 Suppressions used: 00:17:37.492 count bytes template 00:17:37.492 1 11 /usr/src/fio/parse.c 00:17:37.492 1 8 libtcmalloc_minimal.so 00:17:37.492 1 904 libcrypto.so 00:17:37.492 ----------------------------------------------------- 00:17:37.492 00:17:37.492 00:17:37.492 real 0m13.839s 00:17:37.492 user 0m9.043s 00:17:37.492 sys 0m4.092s 00:17:37.492 19:36:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.492 ************************************ 00:17:37.492 END TEST xnvme_fio_plugin 00:17:37.492 ************************************ 00:17:37.492 19:36:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:37.492 Process with pid 71748 is not found 00:17:37.492 19:36:04 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71748 00:17:37.492 19:36:04 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71748 ']' 00:17:37.492 
19:36:04 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 71748 00:17:37.492 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71748) - No such process 00:17:37.492 19:36:04 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71748 is not found' 00:17:37.492 00:17:37.492 real 3m28.950s 00:17:37.492 user 1m59.076s 00:17:37.492 sys 1m15.160s 00:17:37.492 19:36:04 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.492 19:36:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:37.492 ************************************ 00:17:37.492 END TEST nvme_xnvme 00:17:37.492 ************************************ 00:17:37.492 19:36:04 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:17:37.492 19:36:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:37.492 19:36:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.492 19:36:04 -- common/autotest_common.sh@10 -- # set +x 00:17:37.492 ************************************ 00:17:37.492 START TEST blockdev_xnvme 00:17:37.492 ************************************ 00:17:37.492 19:36:04 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:17:37.492 * Looking for test storage... 00:17:37.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:37.755 19:36:04 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:37.755 19:36:04 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:17:37.755 19:36:04 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:37.755 19:36:04 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.755 19:36:04 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:17:37.755 19:36:04 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.755 19:36:04 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:37.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.755 --rc genhtml_branch_coverage=1 00:17:37.755 --rc genhtml_function_coverage=1 00:17:37.755 --rc genhtml_legend=1 00:17:37.755 --rc geninfo_all_blocks=1 00:17:37.755 --rc geninfo_unexecuted_blocks=1 00:17:37.755 00:17:37.755 ' 00:17:37.755 19:36:04 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:37.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.755 --rc genhtml_branch_coverage=1 00:17:37.755 --rc genhtml_function_coverage=1 00:17:37.755 --rc genhtml_legend=1 00:17:37.755 --rc geninfo_all_blocks=1 00:17:37.755 --rc geninfo_unexecuted_blocks=1 00:17:37.755 00:17:37.755 ' 00:17:37.755 19:36:04 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:37.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.755 --rc genhtml_branch_coverage=1 00:17:37.755 --rc genhtml_function_coverage=1 00:17:37.755 --rc genhtml_legend=1 00:17:37.755 --rc geninfo_all_blocks=1 00:17:37.755 --rc geninfo_unexecuted_blocks=1 00:17:37.755 00:17:37.755 ' 00:17:37.755 19:36:04 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:37.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.755 --rc genhtml_branch_coverage=1 00:17:37.755 --rc genhtml_function_coverage=1 00:17:37.755 --rc genhtml_legend=1 00:17:37.755 --rc geninfo_all_blocks=1 00:17:37.755 --rc geninfo_unexecuted_blocks=1 00:17:37.755 00:17:37.755 ' 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72386 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 72386 00:17:37.755 19:36:04 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 72386 ']' 00:17:37.755 19:36:04 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:37.755 19:36:04 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.755 19:36:04 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.755 19:36:04 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.755 19:36:04 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.755 19:36:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:37.755 [2024-12-05 19:36:04.916970] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
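The trace above is the harness bringing up spdk_tgt and parking in waitforlisten until the target's JSON-RPC socket answers. Stripped of the xtrace noise, the pattern is roughly the following sketch (binary and socket paths assumed to match this run's defaults; the readiness probe via rpc_get_methods is illustrative, not the harness's exact helper):

    # launch the target and remember its pid for later teardown
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # poll the UNIX-domain RPC socket until a trivial call succeeds
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
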
00:17:37.756 [2024-12-05 19:36:04.917387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72386 ] 00:17:38.026 [2024-12-05 19:36:05.075700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.026 [2024-12-05 19:36:05.205822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.971 19:36:05 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.971 19:36:05 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:17:38.971 19:36:05 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:17:38.971 19:36:05 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:17:38.971 19:36:05 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:17:38.971 19:36:05 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:17:38.971 19:36:05 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:39.233 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:39.807 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:17:39.807 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:17:39.807 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:17:39.807 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:17:39.807 19:36:06 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:17:39.807 19:36:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:39.807 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:39.808 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:17:39.808 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:17:39.808 19:36:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.808 19:36:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:39.808 19:36:06 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:17:39.808 nvme0n1 00:17:39.808 nvme0n2 00:17:39.808 nvme0n3 00:17:39.808 nvme1n1 00:17:39.808 nvme2n1 00:17:39.808 nvme3n1 00:17:39.808 19:36:07 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.808 19:36:07 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:17:39.808 19:36:07 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.808 19:36:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:39.808 19:36:07 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.808 19:36:07 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:17:39.808 19:36:07 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:17:39.808 19:36:07 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.808 19:36:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:40.071 19:36:07 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.071 19:36:07 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:17:40.071 19:36:07 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.071 19:36:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:40.071 19:36:07 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.071 19:36:07 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:40.071 19:36:07 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.071 19:36:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:40.071 
19:36:07 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.071 19:36:07 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:17:40.071 19:36:07 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:17:40.071 19:36:07 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.071 19:36:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:40.071 19:36:07 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:17:40.071 19:36:07 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.071 19:36:07 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:17:40.072 19:36:07 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "a6648c6f-5a84-429c-ac90-bd7b21911f95"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a6648c6f-5a84-429c-ac90-bd7b21911f95",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "9a20e29c-3736-495c-b706-a80a2c7442df"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9a20e29c-3736-495c-b706-a80a2c7442df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "ef83a35f-84d3-429c-ab23-962352931eda"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ef83a35f-84d3-429c-ab23-962352931eda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' 
"79815a33-9fa8-4537-b5a0-33fb5ca6f96c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "79815a33-9fa8-4537-b5a0-33fb5ca6f96c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "03c72e1a-0ee4-4dc5-a3e6-e9e496d511b8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "03c72e1a-0ee4-4dc5-a3e6-e9e496d511b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "f76ca996-7af6-4cbb-8ef3-3ba82d63cd03"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f76ca996-7af6-4cbb-8ef3-3ba82d63cd03",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:40.072 19:36:07 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:17:40.072 19:36:07 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:17:40.072 19:36:07 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:17:40.072 19:36:07 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:17:40.072 19:36:07 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 72386 00:17:40.072 19:36:07 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72386 ']' 00:17:40.072 19:36:07 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 72386 00:17:40.072 19:36:07 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:17:40.072 19:36:07 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:40.072 19:36:07 blockdev_xnvme -- common/autotest_common.sh@960 -- # 
ps --no-headers -o comm= 72386 00:17:40.072 killing process with pid 72386 00:17:40.072 19:36:07 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:40.072 19:36:07 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:40.072 19:36:07 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72386' 00:17:40.072 19:36:07 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 72386 00:17:40.072 19:36:07 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 72386 00:17:41.989 19:36:08 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:41.989 19:36:08 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:17:41.989 19:36:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:41.989 19:36:08 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:41.989 19:36:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:41.990 ************************************ 00:17:41.990 START TEST bdev_hello_world 00:17:41.990 ************************************ 00:17:41.990 19:36:08 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:17:41.990 [2024-12-05 19:36:08.954553] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:17:41.990 [2024-12-05 19:36:08.954956] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72674 ] 00:17:41.990 [2024-12-05 19:36:09.120085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.250 [2024-12-05 19:36:09.242424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.511 [2024-12-05 19:36:09.649636] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:42.511 [2024-12-05 19:36:09.649891] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:17:42.511 [2024-12-05 19:36:09.649928] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:42.511 [2024-12-05 19:36:09.652036] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:42.511 [2024-12-05 19:36:09.652710] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:42.511 [2024-12-05 19:36:09.652748] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:42.511 [2024-12-05 19:36:09.653578] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
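The hello-world pass above drives SPDK's stock hello_bdev example against the first xnvme bdev and round-trips a string through it. A minimal standalone reproduction, assuming the same io_uring mechanism, conserve_cpu flag, and /dev/nvme0n1 device path that this run's config used (the /tmp/bdev.json path is illustrative):

    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_xnvme_create",
              "params": {
                "io_mechanism": "io_uring",
                "conserve_cpu": true,
                "filename": "/dev/nvme0n1",
                "name": "nvme0n1"
              }
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /tmp/bdev.json -b nvme0n1
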
00:17:42.511 00:17:42.511 [2024-12-05 19:36:09.653751] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:43.450 00:17:43.450 real 0m1.565s 00:17:43.450 user 0m1.176s 00:17:43.450 sys 0m0.239s 00:17:43.450 19:36:10 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.450 ************************************ 00:17:43.450 END TEST bdev_hello_world 00:17:43.450 ************************************ 00:17:43.450 19:36:10 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:43.450 19:36:10 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:17:43.450 19:36:10 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:43.450 19:36:10 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.450 19:36:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:43.450 ************************************ 00:17:43.450 START TEST bdev_bounds 00:17:43.450 ************************************ 00:17:43.450 19:36:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:17:43.450 Process bdevio pid: 72706 00:17:43.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.450 19:36:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72706 00:17:43.450 19:36:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:43.450 19:36:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72706' 00:17:43.450 19:36:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:43.450 19:36:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72706 00:17:43.450 19:36:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72706 ']' 00:17:43.450 19:36:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.450 19:36:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.450 19:36:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.450 19:36:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.450 19:36:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:43.450 [2024-12-05 19:36:10.582358] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
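bdev_bounds wraps SPDK's bdevio fixture in two steps: bdevio is started with -w so it registers the bdevs and then idles on the RPC socket, and tests.py fires the whole suite with a perform_tests call. The shape of the invocation, as exercised in this run:

    # step 1: bdevio loads the bdev config and waits (-w) for the go signal
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # step 2: once the socket is listening, trigger every registered test
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
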
00:17:43.450 [2024-12-05 19:36:10.582524] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72706 ] 00:17:43.706 [2024-12-05 19:36:10.742390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:43.706 [2024-12-05 19:36:10.840014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.706 [2024-12-05 19:36:10.840197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.706 [2024-12-05 19:36:10.840258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.271 19:36:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.271 19:36:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:17:44.271 19:36:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:44.271 I/O targets: 00:17:44.271 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:44.271 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:44.271 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:44.271 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:17:44.271 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:17:44.271 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:17:44.271 00:17:44.271 00:17:44.271 CUnit - A unit testing framework for C - Version 2.1-3 00:17:44.271 http://cunit.sourceforge.net/ 00:17:44.271 00:17:44.271 00:17:44.271 Suite: bdevio tests on: nvme3n1 00:17:44.271 Test: blockdev write read block ...passed 00:17:44.271 Test: blockdev write zeroes read block ...passed 00:17:44.271 Test: blockdev write zeroes read no split ...passed 00:17:44.530 Test: blockdev write zeroes read split ...passed 00:17:44.530 Test: blockdev write zeroes read split partial ...passed 00:17:44.530 Test: blockdev reset ...passed 00:17:44.530 Test: blockdev write read 8 blocks ...passed 00:17:44.530 Test: blockdev write read size > 128k ...passed 00:17:44.530 Test: blockdev write read invalid size ...passed 00:17:44.530 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:44.530 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:44.530 Test: blockdev write read max offset ...passed 00:17:44.530 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:44.530 Test: blockdev writev readv 8 blocks ...passed 00:17:44.530 Test: blockdev writev readv 30 x 1block ...passed 00:17:44.530 Test: blockdev writev readv block ...passed 00:17:44.530 Test: blockdev writev readv size > 128k ...passed 00:17:44.530 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:44.530 Test: blockdev comparev and writev ...passed 00:17:44.530 Test: blockdev nvme passthru rw ...passed 00:17:44.530 Test: blockdev nvme passthru vendor specific ...passed 00:17:44.530 Test: blockdev nvme admin passthru ...passed 00:17:44.530 Test: blockdev copy ...passed 00:17:44.530 Suite: bdevio tests on: nvme2n1 00:17:44.530 Test: blockdev write read block ...passed 00:17:44.530 Test: blockdev write zeroes read block ...passed 00:17:44.530 Test: blockdev write zeroes read no split ...passed 00:17:44.530 Test: blockdev write zeroes read split ...passed 00:17:44.530 Test: blockdev write zeroes read split partial ...passed 00:17:44.530 Test: blockdev reset ...passed 
00:17:44.530 Test: blockdev write read 8 blocks ...passed 00:17:44.530 Test: blockdev write read size > 128k ...passed 00:17:44.530 Test: blockdev write read invalid size ...passed 00:17:44.530 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:44.530 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:44.530 Test: blockdev write read max offset ...passed 00:17:44.530 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:44.530 Test: blockdev writev readv 8 blocks ...passed 00:17:44.530 Test: blockdev writev readv 30 x 1block ...passed 00:17:44.530 Test: blockdev writev readv block ...passed 00:17:44.530 Test: blockdev writev readv size > 128k ...passed 00:17:44.530 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:44.530 Test: blockdev comparev and writev ...passed 00:17:44.530 Test: blockdev nvme passthru rw ...passed 00:17:44.530 Test: blockdev nvme passthru vendor specific ...passed 00:17:44.530 Test: blockdev nvme admin passthru ...passed 00:17:44.530 Test: blockdev copy ...passed 00:17:44.530 Suite: bdevio tests on: nvme1n1 00:17:44.530 Test: blockdev write read block ...passed 00:17:44.530 Test: blockdev write zeroes read block ...passed 00:17:44.530 Test: blockdev write zeroes read no split ...passed 00:17:44.530 Test: blockdev write zeroes read split ...passed 00:17:44.530 Test: blockdev write zeroes read split partial ...passed 00:17:44.530 Test: blockdev reset ...passed 00:17:44.530 Test: blockdev write read 8 blocks ...passed 00:17:44.530 Test: blockdev write read size > 128k ...passed 00:17:44.530 Test: blockdev write read invalid size ...passed 00:17:44.530 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:44.530 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:44.530 Test: blockdev write read max offset ...passed 00:17:44.530 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:44.530 Test: blockdev writev readv 8 blocks ...passed 00:17:44.530 Test: blockdev writev readv 30 x 1block ...passed 00:17:44.530 Test: blockdev writev readv block ...passed 00:17:44.530 Test: blockdev writev readv size > 128k ...passed 00:17:44.530 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:44.530 Test: blockdev comparev and writev ...passed 00:17:44.530 Test: blockdev nvme passthru rw ...passed 00:17:44.530 Test: blockdev nvme passthru vendor specific ...passed 00:17:44.530 Test: blockdev nvme admin passthru ...passed 00:17:44.530 Test: blockdev copy ...passed 00:17:44.530 Suite: bdevio tests on: nvme0n3 00:17:44.530 Test: blockdev write read block ...passed 00:17:44.530 Test: blockdev write zeroes read block ...passed 00:17:44.530 Test: blockdev write zeroes read no split ...passed 00:17:44.530 Test: blockdev write zeroes read split ...passed 00:17:44.530 Test: blockdev write zeroes read split partial ...passed 00:17:44.530 Test: blockdev reset ...passed 00:17:44.530 Test: blockdev write read 8 blocks ...passed 00:17:44.530 Test: blockdev write read size > 128k ...passed 00:17:44.530 Test: blockdev write read invalid size ...passed 00:17:44.530 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:44.530 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:44.530 Test: blockdev write read max offset ...passed 00:17:44.530 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:44.530 Test: blockdev writev readv 8 blocks 
...passed 00:17:44.530 Test: blockdev writev readv 30 x 1block ...passed 00:17:44.530 Test: blockdev writev readv block ...passed 00:17:44.530 Test: blockdev writev readv size > 128k ...passed 00:17:44.530 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:44.530 Test: blockdev comparev and writev ...passed 00:17:44.530 Test: blockdev nvme passthru rw ...passed 00:17:44.530 Test: blockdev nvme passthru vendor specific ...passed 00:17:44.530 Test: blockdev nvme admin passthru ...passed 00:17:44.530 Test: blockdev copy ...passed 00:17:44.530 Suite: bdevio tests on: nvme0n2 00:17:44.530 Test: blockdev write read block ...passed 00:17:44.530 Test: blockdev write zeroes read block ...passed 00:17:44.530 Test: blockdev write zeroes read no split ...passed 00:17:44.530 Test: blockdev write zeroes read split ...passed 00:17:44.530 Test: blockdev write zeroes read split partial ...passed 00:17:44.789 Test: blockdev reset ...passed 00:17:44.789 Test: blockdev write read 8 blocks ...passed 00:17:44.789 Test: blockdev write read size > 128k ...passed 00:17:44.789 Test: blockdev write read invalid size ...passed 00:17:44.789 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:44.789 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:44.789 Test: blockdev write read max offset ...passed 00:17:44.789 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:44.789 Test: blockdev writev readv 8 blocks ...passed 00:17:44.789 Test: blockdev writev readv 30 x 1block ...passed 00:17:44.789 Test: blockdev writev readv block ...passed 00:17:44.789 Test: blockdev writev readv size > 128k ...passed 00:17:44.789 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:44.789 Test: blockdev comparev and writev ...passed 00:17:44.789 Test: blockdev nvme passthru rw ...passed 00:17:44.789 Test: blockdev nvme passthru vendor specific ...passed 00:17:44.789 Test: blockdev nvme admin passthru ...passed 00:17:44.789 Test: blockdev copy ...passed 00:17:44.789 Suite: bdevio tests on: nvme0n1 00:17:44.789 Test: blockdev write read block ...passed 00:17:44.789 Test: blockdev write zeroes read block ...passed 00:17:44.789 Test: blockdev write zeroes read no split ...passed 00:17:44.789 Test: blockdev write zeroes read split ...passed 00:17:44.789 Test: blockdev write zeroes read split partial ...passed 00:17:44.789 Test: blockdev reset ...passed 00:17:44.790 Test: blockdev write read 8 blocks ...passed 00:17:44.790 Test: blockdev write read size > 128k ...passed 00:17:44.790 Test: blockdev write read invalid size ...passed 00:17:44.790 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:44.790 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:44.790 Test: blockdev write read max offset ...passed 00:17:44.790 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:44.790 Test: blockdev writev readv 8 blocks ...passed 00:17:44.790 Test: blockdev writev readv 30 x 1block ...passed 00:17:44.790 Test: blockdev writev readv block ...passed 00:17:44.790 Test: blockdev writev readv size > 128k ...passed 00:17:44.790 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:44.790 Test: blockdev comparev and writev ...passed 00:17:44.790 Test: blockdev nvme passthru rw ...passed 00:17:44.790 Test: blockdev nvme passthru vendor specific ...passed 00:17:44.790 Test: blockdev nvme admin passthru ...passed 00:17:44.790 Test: blockdev copy ...passed 
00:17:44.790 00:17:44.790 Run Summary: Type Total Ran Passed Failed Inactive 00:17:44.790 suites 6 6 n/a 0 0 00:17:44.790 tests 138 138 138 0 0 00:17:44.790 asserts 780 780 780 0 n/a 00:17:44.790 00:17:44.790 Elapsed time = 0.937 seconds 00:17:44.790 0 00:17:44.790 19:36:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72706 00:17:44.790 19:36:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72706 ']' 00:17:44.790 19:36:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72706 00:17:44.790 19:36:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:17:44.790 19:36:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:44.790 19:36:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72706 00:17:44.790 killing process with pid 72706 00:17:44.790 19:36:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:44.790 19:36:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:44.790 19:36:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72706' 00:17:44.790 19:36:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72706 00:17:44.790 19:36:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72706 00:17:45.356 ************************************ 00:17:45.356 END TEST bdev_bounds 00:17:45.356 ************************************ 00:17:45.356 19:36:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:45.356 00:17:45.356 real 0m2.095s 00:17:45.356 user 0m5.267s 00:17:45.356 sys 0m0.269s 00:17:45.356 19:36:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:45.356 19:36:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:45.615 19:36:12 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:17:45.615 19:36:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:45.615 19:36:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:45.615 19:36:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:45.615 ************************************ 00:17:45.615 START TEST bdev_nbd 00:17:45.615 ************************************ 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
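killprocess, seen here tearing down bdevio just as it tore down spdk_tgt earlier, reduces to a guarded kill/wait. A simplified sketch reconstructed from the trace above (the real helper in autotest_common.sh also special-cases a sudo wrapper process, elided here):

    killprocess() {
        local pid=$1
        # a vanished pid is reported, not treated as an error
        kill -0 "$pid" 2>/dev/null || { echo "Process with pid $pid is not found"; return 0; }
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
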
00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72761 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72761 /var/tmp/spdk-nbd.sock 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72761 ']' 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:45.615 19:36:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:45.615 [2024-12-05 19:36:12.716929] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
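nbd_function_test pushes the bdevs through the kernel NBD layer: bdev_svc exposes the RPC socket, each bdev is mapped to a /dev/nbdX node, and a direct-I/O dd round-trip proves the mapping works. The per-bdev portion, using this run's socket path (nbd kernel module assumed loaded; rpc.py prints the /dev/nbdN node it attached):

    # map the bdev to an auto-assigned kernel block device
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1
    # one 4 KiB O_DIRECT read verifies the block device answers
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    rm -f /tmp/nbdtest
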
00:17:45.615 [2024-12-05 19:36:12.717050] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.874 [2024-12-05 19:36:12.872498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.874 [2024-12-05 19:36:12.971403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.441 19:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.441 19:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:17:46.441 19:36:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:17:46.441 19:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:46.441 19:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:46.441 19:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:46.441 19:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:17:46.441 19:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:46.441 19:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:46.441 19:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:46.441 19:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:46.441 19:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:46.441 19:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:46.441 19:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:46.441 19:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:17:46.699 19:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:46.699 19:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:46.699 19:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:46.699 19:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:46.699 19:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:46.699 19:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:46.699 19:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:46.699 19:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:46.700 19:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:46.700 19:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:46.700 19:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:46.700 19:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:46.700 
1+0 records in 00:17:46.700 1+0 records out 00:17:46.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536267 s, 7.6 MB/s 00:17:46.700 19:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.700 19:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:46.700 19:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.700 19:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:46.700 19:36:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:46.700 19:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:46.700 19:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:46.700 19:36:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:17:46.958 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:17:46.958 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:17:46.958 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:17:46.958 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:46.958 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:46.958 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:46.958 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:46.958 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:46.958 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:46.958 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:46.958 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:46.958 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:46.958 1+0 records in 00:17:46.958 1+0 records out 00:17:46.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508378 s, 8.1 MB/s 00:17:46.958 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.958 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:46.958 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.958 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:46.958 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:46.958 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:46.959 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:46.959 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:17:47.218 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:17:47.218 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:17:47.218 19:36:14 
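
Note: each nbd_start_disk above is followed by the same readiness probe, traced once per device: poll /proc/partitions until the nbd name appears, then prove the export services I/O with a single 4 KiB O_DIRECT read. A minimal bash sketch of that helper, reconstructed from the xtrace records (the 20-try budgets, the grep on /proc/partitions, and the dd/stat/rm size check are all visible in the trace; the sleep interval and the scratch-file path are assumptions, not shown in the log):

    # sketch of the waitfornbd helper implied by the trace; not SPDK's exact source
    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/tmp/nbdtest            # scratch path assumed for the sketch
        # phase 1: wait (up to 20 tries) for the kernel to publish the device
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                     # polling interval assumed; not in the trace
        done
        # phase 2: confirm the device answers I/O with one 4 KiB direct read
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s "$tmp")
                rm -f "$tmp"
                [ "$size" != 0 ] && return 0
            fi
        done
        return 1
    }
    # driver pattern from the trace: start the bdev over RPC, then wait on the
    # device node the RPC printed, e.g.
    #   nbd_device=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1)
    #   waitfornbd "$(basename "$nbd_device")"
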
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:17:47.218 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:17:47.218 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:47.218 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:47.218 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:47.218 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:17:47.218 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:47.218 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:47.218 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:47.218 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:47.218 1+0 records in 00:17:47.218 1+0 records out 00:17:47.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362542 s, 11.3 MB/s 00:17:47.218 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.218 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:47.218 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.218 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:47.218 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:47.218 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:47.218 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:47.218 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:17:47.218 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:47.477 1+0 records in 00:17:47.477 1+0 records out 00:17:47.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502269 s, 8.2 MB/s 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:47.477 1+0 records in 00:17:47.477 1+0 records out 00:17:47.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108778 s, 3.8 MB/s 00:17:47.477 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.478 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:47.478 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.478 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:47.478 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:47.478 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:47.478 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:47.478 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:17:47.749 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:17:47.749 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:17:47.749 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:17:47.749 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:17:47.749 19:36:14 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:47.749 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:47.749 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:47.749 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:17:47.749 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:47.749 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:47.749 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:47.749 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:47.749 1+0 records in 00:17:47.749 1+0 records out 00:17:47.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00074807 s, 5.5 MB/s 00:17:47.749 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.749 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:47.749 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.749 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:47.749 19:36:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:47.749 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:47.749 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:47.749 19:36:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:48.050 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:48.050 { 00:17:48.050 "nbd_device": "/dev/nbd0", 00:17:48.050 "bdev_name": "nvme0n1" 00:17:48.050 }, 00:17:48.050 { 00:17:48.050 "nbd_device": "/dev/nbd1", 00:17:48.050 "bdev_name": "nvme0n2" 00:17:48.050 }, 00:17:48.050 { 00:17:48.050 "nbd_device": "/dev/nbd2", 00:17:48.050 "bdev_name": "nvme0n3" 00:17:48.050 }, 00:17:48.050 { 00:17:48.050 "nbd_device": "/dev/nbd3", 00:17:48.050 "bdev_name": "nvme1n1" 00:17:48.050 }, 00:17:48.050 { 00:17:48.050 "nbd_device": "/dev/nbd4", 00:17:48.050 "bdev_name": "nvme2n1" 00:17:48.050 }, 00:17:48.050 { 00:17:48.050 "nbd_device": "/dev/nbd5", 00:17:48.050 "bdev_name": "nvme3n1" 00:17:48.050 } 00:17:48.050 ]' 00:17:48.050 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:48.050 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:48.050 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:48.050 { 00:17:48.050 "nbd_device": "/dev/nbd0", 00:17:48.050 "bdev_name": "nvme0n1" 00:17:48.050 }, 00:17:48.050 { 00:17:48.050 "nbd_device": "/dev/nbd1", 00:17:48.050 "bdev_name": "nvme0n2" 00:17:48.050 }, 00:17:48.050 { 00:17:48.050 "nbd_device": "/dev/nbd2", 00:17:48.050 "bdev_name": "nvme0n3" 00:17:48.050 }, 00:17:48.050 { 00:17:48.050 "nbd_device": "/dev/nbd3", 00:17:48.050 "bdev_name": "nvme1n1" 00:17:48.050 }, 00:17:48.050 { 00:17:48.050 "nbd_device": "/dev/nbd4", 00:17:48.050 "bdev_name": "nvme2n1" 00:17:48.050 }, 00:17:48.050 { 00:17:48.050 "nbd_device": 
"/dev/nbd5", 00:17:48.050 "bdev_name": "nvme3n1" 00:17:48.050 } 00:17:48.050 ]' 00:17:48.050 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:17:48.050 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:48.050 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:17:48.050 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:48.050 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:48.050 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:48.050 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:48.309 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:48.309 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:48.309 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:48.309 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:48.309 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:48.309 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:48.309 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:48.309 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:48.309 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:48.309 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:48.569 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:48.569 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:48.569 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:48.569 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:48.569 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:48.569 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:48.569 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:48.569 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:48.569 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:48.569 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:17:48.569 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:17:48.569 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:17:48.569 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:17:48.569 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:48.569 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:48.569 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:17:48.570 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:48.570 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:48.570 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:48.570 19:36:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:17:48.830 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:17:48.830 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:17:48.830 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:17:48.830 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:48.830 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:48.830 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:17:48.830 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:48.830 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:48.830 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:48.830 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:17:49.090 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:17:49.090 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:17:49.090 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:17:49.090 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:49.090 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:49.090 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:17:49.090 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:49.090 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:49.090 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:49.090 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:17:49.351 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:17:49.351 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:17:49.351 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:17:49.351 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:49.351 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:49.351 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:17:49.351 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:49.351 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:49.351 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:49.351 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:49.351 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:49.613 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:17:49.874 /dev/nbd0 00:17:49.874 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:49.874 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:49.874 19:36:16 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:49.874 19:36:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:49.874 19:36:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:49.874 19:36:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:49.875 19:36:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:49.875 19:36:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:49.875 19:36:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:49.875 19:36:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:49.875 19:36:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:49.875 1+0 records in 00:17:49.875 1+0 records out 00:17:49.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134649 s, 3.0 MB/s 00:17:49.875 19:36:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:49.875 19:36:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:49.875 19:36:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:49.875 19:36:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:49.875 19:36:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:49.875 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:49.875 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:49.875 19:36:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:17:50.135 /dev/nbd1 00:17:50.135 19:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:50.135 19:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:50.135 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:50.135 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:50.135 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:50.135 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:50.135 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:50.135 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:50.135 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:50.135 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:50.135 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:50.135 1+0 records in 00:17:50.135 1+0 records out 00:17:50.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000951185 s, 4.3 MB/s 00:17:50.135 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:50.135 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:50.135 19:36:17 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:50.135 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:50.135 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:50.135 19:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:50.135 19:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:50.135 19:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:17:50.397 /dev/nbd10 00:17:50.397 19:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:17:50.397 19:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:17:50.397 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:17:50.397 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:50.397 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:50.397 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:50.397 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:17:50.397 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:50.397 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:50.397 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:50.397 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:50.397 1+0 records in 00:17:50.397 1+0 records out 00:17:50.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119342 s, 3.4 MB/s 00:17:50.397 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:50.397 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:50.397 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:50.397 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:50.397 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:50.397 19:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:50.398 19:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:50.398 19:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:17:50.660 /dev/nbd11 00:17:50.660 19:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:17:50.660 19:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:17:50.660 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:17:50.660 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:50.660 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:50.660 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:50.660 19:36:17 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:17:50.660 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:50.660 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:50.660 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:50.660 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:50.660 1+0 records in 00:17:50.660 1+0 records out 00:17:50.660 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000974451 s, 4.2 MB/s 00:17:50.660 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:50.660 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:50.660 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:50.660 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:50.660 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:50.660 19:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:50.660 19:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:50.660 19:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:17:50.921 /dev/nbd12 00:17:50.921 19:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:17:50.921 19:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:17:50.921 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:17:50.921 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:50.921 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:50.921 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:50.921 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:17:50.921 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:50.921 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:50.921 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:50.921 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:50.921 1+0 records in 00:17:50.921 1+0 records out 00:17:50.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00138569 s, 3.0 MB/s 00:17:50.921 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:50.921 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:50.921 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:50.921 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:50.921 19:36:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:50.921 19:36:17 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:50.921 19:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:50.921 19:36:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:17:51.183 /dev/nbd13 00:17:51.183 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:17:51.183 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:17:51.183 19:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:17:51.183 19:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:51.183 19:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:51.183 19:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:51.183 19:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:17:51.183 19:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:51.183 19:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:51.183 19:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:51.183 19:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:51.183 1+0 records in 00:17:51.183 1+0 records out 00:17:51.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113174 s, 3.6 MB/s 00:17:51.183 19:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:51.183 19:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:51.183 19:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:51.183 19:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:51.183 19:36:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:51.183 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:51.183 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:51.183 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:51.183 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:51.183 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:51.444 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:51.444 { 00:17:51.444 "nbd_device": "/dev/nbd0", 00:17:51.444 "bdev_name": "nvme0n1" 00:17:51.444 }, 00:17:51.444 { 00:17:51.444 "nbd_device": "/dev/nbd1", 00:17:51.444 "bdev_name": "nvme0n2" 00:17:51.444 }, 00:17:51.444 { 00:17:51.444 "nbd_device": "/dev/nbd10", 00:17:51.444 "bdev_name": "nvme0n3" 00:17:51.444 }, 00:17:51.444 { 00:17:51.444 "nbd_device": "/dev/nbd11", 00:17:51.444 "bdev_name": "nvme1n1" 00:17:51.444 }, 00:17:51.444 { 00:17:51.444 "nbd_device": "/dev/nbd12", 00:17:51.444 "bdev_name": "nvme2n1" 00:17:51.444 }, 00:17:51.444 { 00:17:51.444 "nbd_device": "/dev/nbd13", 00:17:51.444 "bdev_name": "nvme3n1" 00:17:51.444 } 00:17:51.444 ]' 00:17:51.444 19:36:18 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:17:51.444 { 00:17:51.444 "nbd_device": "/dev/nbd0", 00:17:51.444 "bdev_name": "nvme0n1" 00:17:51.444 }, 00:17:51.444 { 00:17:51.444 "nbd_device": "/dev/nbd1", 00:17:51.444 "bdev_name": "nvme0n2" 00:17:51.444 }, 00:17:51.444 { 00:17:51.444 "nbd_device": "/dev/nbd10", 00:17:51.444 "bdev_name": "nvme0n3" 00:17:51.444 }, 00:17:51.444 { 00:17:51.444 "nbd_device": "/dev/nbd11", 00:17:51.444 "bdev_name": "nvme1n1" 00:17:51.444 }, 00:17:51.444 { 00:17:51.444 "nbd_device": "/dev/nbd12", 00:17:51.444 "bdev_name": "nvme2n1" 00:17:51.444 }, 00:17:51.444 { 00:17:51.444 "nbd_device": "/dev/nbd13", 00:17:51.444 "bdev_name": "nvme3n1" 00:17:51.444 } 00:17:51.444 ]' 00:17:51.444 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:51.444 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:51.444 /dev/nbd1 00:17:51.444 /dev/nbd10 00:17:51.444 /dev/nbd11 00:17:51.444 /dev/nbd12 00:17:51.444 /dev/nbd13' 00:17:51.444 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:51.444 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:51.444 /dev/nbd1 00:17:51.444 /dev/nbd10 00:17:51.444 /dev/nbd11 00:17:51.444 /dev/nbd12 00:17:51.444 /dev/nbd13' 00:17:51.444 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:17:51.444 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:17:51.444 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:17:51.444 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:17:51.444 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:17:51.444 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:51.444 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:51.444 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:51.444 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:51.444 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:51.444 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:51.444 256+0 records in 00:17:51.444 256+0 records out 00:17:51.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00716245 s, 146 MB/s 00:17:51.444 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:51.444 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:51.705 256+0 records in 00:17:51.705 256+0 records out 00:17:51.705 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.247036 s, 4.2 MB/s 00:17:51.705 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:51.705 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:51.963 256+0 records in 00:17:51.964 256+0 records out 00:17:51.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.210459 s, 
5.0 MB/s 00:17:51.964 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:51.964 19:36:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:17:51.964 256+0 records in 00:17:51.964 256+0 records out 00:17:51.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0800585 s, 13.1 MB/s 00:17:51.964 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:51.964 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:17:51.964 256+0 records in 00:17:51.964 256+0 records out 00:17:51.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0749215 s, 14.0 MB/s 00:17:51.964 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:51.964 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:17:51.964 256+0 records in 00:17:51.964 256+0 records out 00:17:51.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.07186 s, 14.6 MB/s 00:17:51.964 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:51.964 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:17:52.222 256+0 records in 00:17:52.222 256+0 records out 00:17:52.222 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0641331 s, 16.3 MB/s 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 
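
Note: the data-verify pass traced above and just below follows a plain write-then-compare cycle: 1 MiB of /dev/urandom is written once to a scratch file, pushed to every exported device with O_DIRECT, and cmp then checks the first 1 MiB of each device byte-for-byte against that file. Condensed into a bash sketch (the device list and the dd/cmp arguments come from the trace; the scratch path and error handling are illustrative):

    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    tmp_file=/tmp/nbdrandtest             # path assumed for the sketch

    # write pass: one 1 MiB random buffer, replicated to every device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify pass: byte-compare the first 1 MiB of each device with the source
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev" || echo "data mismatch on $dev" >&2
    done
    rm -f "$tmp_file"
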
00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:52.222 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:52.480 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:52.480 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:52.480 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:52.480 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:52.480 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:52.480 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:52.480 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:52.480 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:52.480 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:52.480 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:52.480 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:52.480 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:52.480 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:52.480 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:52.480 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:52.480 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:52.738 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:52.738 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:52.738 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:52.738 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd10 00:17:52.738 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:17:52.738 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:17:52.738 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:17:52.738 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:52.738 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:52.738 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:17:52.738 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:52.738 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:52.738 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:52.738 19:36:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:17:52.996 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:17:52.996 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:17:52.996 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:17:52.996 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:52.996 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:52.996 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:17:52.996 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:52.996 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:52.996 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:52.996 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:17:53.254 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:17:53.254 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:17:53.254 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:17:53.254 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:53.254 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:53.254 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:17:53.254 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:53.254 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:53.254 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:53.254 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:17:53.511 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:17:53.512 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:17:53.512 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:17:53.512 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:53.512 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
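
Note: teardown mirrors setup. Every nbd_stop_disk RPC is followed by a wait for the device to drop out of /proc/partitions, and once all six disks are stopped the harness asserts that nbd_get_disks reports nothing left. Both patterns, sketched from the surrounding records (the loop bound and the jq | grep -c pipeline match the trace; the sleep interval is an assumption):

    # wait until the kernel no longer lists the device
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break   # gone: done waiting
            sleep 0.1                                          # interval assumed
        done
        return 0
    }

    # count devices still exported; the `|| true` matches the trace, since
    # grep -c exits non-zero when the count is 0
    count=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ] || echo "unexpected leftover nbd devices: $count" >&2
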
00:17:53.512 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:17:53.512 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:53.512 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:53.512 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:53.512 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:53.512 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:53.770 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:53.770 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:53.770 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:53.770 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:53.770 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:53.770 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:53.770 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:53.770 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:53.770 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:53.770 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:53.770 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:53.770 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:53.770 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:53.770 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:53.770 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:53.770 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:53.770 malloc_lvol_verify 00:17:53.770 19:36:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:54.029 789c528b-0431-49f7-ac3e-741684483f6d 00:17:54.029 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:54.287 71e1391b-d040-4760-a6fe-551e2cfc6e7f 00:17:54.287 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:54.546 /dev/nbd0 00:17:54.546 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:54.546 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:54.546 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:54.546 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:54.546 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:17:54.546 mke2fs 1.47.0 (5-Feb-2023) 00:17:54.546 Discarding device 
blocks: 0/4096 done 00:17:54.546 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:54.546 00:17:54.546 Allocating group tables: 0/1 done 00:17:54.546 Writing inode tables: 0/1 done 00:17:54.546 Creating journal (1024 blocks): done 00:17:54.546 Writing superblocks and filesystem accounting information: 0/1 done 00:17:54.546 00:17:54.546 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:54.546 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:54.546 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:54.546 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:54.546 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:54.546 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:54.546 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:54.804 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:54.804 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:54.804 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:54.804 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:54.804 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:54.804 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:54.804 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:54.804 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:54.804 19:36:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72761 00:17:54.804 19:36:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72761 ']' 00:17:54.804 19:36:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72761 00:17:54.804 19:36:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:17:54.804 19:36:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.804 19:36:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72761 00:17:54.804 19:36:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:54.804 19:36:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:54.804 killing process with pid 72761 00:17:54.804 19:36:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72761' 00:17:54.804 19:36:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72761 00:17:54.804 19:36:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72761 00:17:55.371 19:36:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:55.371 00:17:55.371 real 0m9.790s 00:17:55.371 user 0m13.750s 00:17:55.371 sys 0m3.283s 00:17:55.371 19:36:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:55.371 19:36:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:55.371 ************************************ 00:17:55.371 END TEST bdev_nbd 00:17:55.371 
************************************ 00:17:55.371 19:36:22 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:17:55.371 19:36:22 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:17:55.371 19:36:22 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:17:55.371 19:36:22 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:17:55.371 19:36:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:55.371 19:36:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.371 19:36:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:55.371 ************************************ 00:17:55.371 START TEST bdev_fio 00:17:55.371 ************************************ 00:17:55.371 19:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:17:55.371 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:55.371 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:55.371 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:55.371 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:55.371 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:55.371 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:55.371 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:55.371 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:55.371 19:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:17:55.372 19:36:22 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:55.372 ************************************ 00:17:55.372 START TEST bdev_fio_rw_verify 00:17:55.372 ************************************ 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:55.372 19:36:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:55.630 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:55.630 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:55.630 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:55.630 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:55.630 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:55.630 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:55.630 fio-3.35 00:17:55.630 Starting 6 threads 00:18:07.851 00:18:07.851 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=73166: Thu Dec 5 19:36:33 2024 00:18:07.852 read: IOPS=27.9k, BW=109MiB/s (114MB/s)(1092MiB/10003msec) 00:18:07.852 slat (usec): min=2, max=1882, avg= 5.14, stdev=11.09 00:18:07.852 clat (usec): min=57, max=474728, avg=637.15, 
stdev=2603.67 00:18:07.852 lat (usec): min=60, max=474736, avg=642.29, stdev=2603.86 00:18:07.852 clat percentiles (usec): 00:18:07.852 | 50.000th=[ 408], 99.000th=[ 2802], 99.900th=[ 4178], 00:18:07.852 | 99.990th=[ 5735], 99.999th=[476054] 00:18:07.852 write: IOPS=28.2k, BW=110MiB/s (116MB/s)(1103MiB/10003msec); 0 zone resets 00:18:07.852 slat (usec): min=10, max=4426, avg=29.04, stdev=90.21 00:18:07.852 clat (usec): min=56, max=7182, avg=820.26, stdev=686.32 00:18:07.852 lat (usec): min=78, max=7197, avg=849.30, stdev=699.61 00:18:07.852 clat percentiles (usec): 00:18:07.852 | 50.000th=[ 570], 99.000th=[ 3294], 99.900th=[ 4621], 99.990th=[ 5800], 00:18:07.852 | 99.999th=[ 7177] 00:18:07.852 bw ( KiB/s): min=53264, max=207788, per=100.00%, avg=115906.53, stdev=7661.52, samples=114 00:18:07.852 iops : min=13316, max=51946, avg=28975.84, stdev=1915.34, samples=114 00:18:07.852 lat (usec) : 100=0.14%, 250=15.94%, 500=35.44%, 750=18.23%, 1000=8.54% 00:18:07.852 lat (msec) : 2=15.99%, 4=5.49%, 10=0.22%, 500=0.01% 00:18:07.852 cpu : usr=44.76%, sys=31.72%, ctx=7623, majf=0, minf=23945 00:18:07.852 IO depths : 1=11.4%, 2=23.7%, 4=51.2%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:07.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.852 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.852 issued rwts: total=279527,282425,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.852 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:07.852 00:18:07.852 Run status group 0 (all jobs): 00:18:07.852 READ: bw=109MiB/s (114MB/s), 109MiB/s-109MiB/s (114MB/s-114MB/s), io=1092MiB (1145MB), run=10003-10003msec 00:18:07.852 WRITE: bw=110MiB/s (116MB/s), 110MiB/s-110MiB/s (116MB/s-116MB/s), io=1103MiB (1157MB), run=10003-10003msec 00:18:07.852 ----------------------------------------------------- 00:18:07.852 Suppressions used: 00:18:07.852 count bytes template 00:18:07.852 6 48 /usr/src/fio/parse.c 00:18:07.852 2712 260352 /usr/src/fio/iolog.c 00:18:07.852 1 8 libtcmalloc_minimal.so 00:18:07.852 1 904 libcrypto.so 00:18:07.852 ----------------------------------------------------- 00:18:07.852 00:18:07.852 00:18:07.852 real 0m11.844s 00:18:07.852 user 0m28.306s 00:18:07.852 sys 0m19.303s 00:18:07.852 ************************************ 00:18:07.852 END TEST bdev_fio_rw_verify 00:18:07.852 ************************************ 00:18:07.852 19:36:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.852 19:36:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:18:07.852 19:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:18:07.852 19:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:07.852 19:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:18:07.852 19:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:07.852 19:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:18:07.852 19:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:18:07.852 19:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:07.852 19:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:18:07.852 19:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:07.852 19:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:18:07.852 19:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:07.852 19:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:07.852 19:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:07.852 19:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:18:07.852 19:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:18:07.852 19:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:18:07.852 19:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:18:07.853 19:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "a6648c6f-5a84-429c-ac90-bd7b21911f95"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a6648c6f-5a84-429c-ac90-bd7b21911f95",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "9a20e29c-3736-495c-b706-a80a2c7442df"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9a20e29c-3736-495c-b706-a80a2c7442df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "ef83a35f-84d3-429c-ab23-962352931eda"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ef83a35f-84d3-429c-ab23-962352931eda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "79815a33-9fa8-4537-b5a0-33fb5ca6f96c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "79815a33-9fa8-4537-b5a0-33fb5ca6f96c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "03c72e1a-0ee4-4dc5-a3e6-e9e496d511b8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "03c72e1a-0ee4-4dc5-a3e6-e9e496d511b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "f76ca996-7af6-4cbb-8ef3-3ba82d63cd03"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f76ca996-7af6-4cbb-8ef3-3ba82d63cd03",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:18:07.853 19:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:18:07.853 19:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:07.853 /home/vagrant/spdk_repo/spdk 00:18:07.853 19:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:18:07.853 19:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:18:07.853 19:36:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
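The fio_bdev run above hinges on one non-obvious step visible in the xtrace: the external fio binary is not built with ASAN, so the sanitizer runtime that the SPDK fio plugin links against has to be preloaded ahead of the plugin, otherwise loading the plugin aborts (ASAN insists its runtime come first in the library list; that failure mode is stated here only as motivation). A minimal sketch of the pattern, using the same paths and flags the harness logs:

    #!/usr/bin/env bash
    # Sketch of the LD_PRELOAD dance from common/autotest_common.sh above.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # Ask the dynamic linker which ASAN runtime the plugin was linked against.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # Preload the sanitizer runtime first, then the plugin, then run fio
    # with the SPDK bdev ioengine against the generated job file.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio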
00:18:07.853 00:18:07.853 real 0m12.006s 00:18:07.853 user 0m28.383s 00:18:07.853 sys 0m19.374s 00:18:07.853 19:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.853 ************************************ 00:18:07.853 END TEST bdev_fio 00:18:07.853 ************************************ 00:18:07.853 19:36:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:07.853 19:36:34 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:07.853 19:36:34 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:07.853 19:36:34 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:07.853 19:36:34 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:07.853 19:36:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:07.853 ************************************ 00:18:07.853 START TEST bdev_verify 00:18:07.853 ************************************ 00:18:07.853 19:36:34 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:07.853 [2024-12-05 19:36:34.617062] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:18:07.853 [2024-12-05 19:36:34.617203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73342 ] 00:18:07.853 [2024-12-05 19:36:34.781202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:07.853 [2024-12-05 19:36:34.904122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.853 [2024-12-05 19:36:34.904215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.113 Running I/O for 5 seconds... 
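For reading the per-second samples and the results table that follow, the bdevperf command line decodes as below; every flag appears verbatim in the invocation above, and the core mask is why each bdev shows up twice in the table, once per core. A sketch with the standard flag meanings spelled out (-C is carried over from the harness unannotated):

    # Annotated sketch of the bdevperf verify run driven above.
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    "$bdevperf" \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # --json  : bdev-layer configuration (the six xNVMe bdevs dumped earlier)
    # -q 128  : keep 128 I/Os in flight per job
    # -o 4096 : issue 4 KiB I/Os
    # -w verify: write a data pattern, read it back, and compare
    # -t 5    : bound each job to five seconds
    # -m 0x3  : run reactors on cores 0 and 1, one job per core per bdev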
00:18:10.446 22887.00 IOPS, 89.40 MiB/s [2024-12-05T19:36:38.646Z] 23571.00 IOPS, 92.07 MiB/s [2024-12-05T19:36:39.589Z] 24279.33 IOPS, 94.84 MiB/s [2024-12-05T19:36:40.533Z] 24217.50 IOPS, 94.60 MiB/s [2024-12-05T19:36:40.533Z] 24002.20 IOPS, 93.76 MiB/s 00:18:13.278 Latency(us) 00:18:13.278 [2024-12-05T19:36:40.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.278 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:13.278 Verification LBA range: start 0x0 length 0x80000 00:18:13.278 nvme0n1 : 5.08 1789.00 6.99 0.00 0.00 71419.75 10637.00 98808.12 00:18:13.278 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:13.278 Verification LBA range: start 0x80000 length 0x80000 00:18:13.278 nvme0n1 : 5.05 1927.45 7.53 0.00 0.00 66291.02 6074.68 68964.04 00:18:13.278 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:13.278 Verification LBA range: start 0x0 length 0x80000 00:18:13.278 nvme0n2 : 5.07 1842.96 7.20 0.00 0.00 69204.16 7259.37 75820.11 00:18:13.278 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:13.278 Verification LBA range: start 0x80000 length 0x80000 00:18:13.278 nvme0n2 : 5.05 1924.52 7.52 0.00 0.00 66285.04 10989.88 62914.56 00:18:13.278 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:13.278 Verification LBA range: start 0x0 length 0x80000 00:18:13.278 nvme0n3 : 5.04 1828.12 7.14 0.00 0.00 69633.97 11746.07 70173.93 00:18:13.278 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:13.278 Verification LBA range: start 0x80000 length 0x80000 00:18:13.278 nvme0n3 : 5.06 1923.94 7.52 0.00 0.00 66186.83 13208.02 56865.08 00:18:13.278 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:13.278 Verification LBA range: start 0x0 length 0x20000 00:18:13.278 nvme1n1 : 5.08 1838.23 7.18 0.00 0.00 69131.66 11241.94 62511.26 00:18:13.278 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:13.278 Verification LBA range: start 0x20000 length 0x20000 00:18:13.278 nvme1n1 : 5.06 1920.89 7.50 0.00 0.00 66177.12 8872.57 67754.14 00:18:13.278 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:13.278 Verification LBA range: start 0x0 length 0xbd0bd 00:18:13.278 nvme2n1 : 5.08 2372.84 9.27 0.00 0.00 53390.94 5570.56 61301.37 00:18:13.278 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:13.278 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:18:13.278 nvme2n1 : 5.07 2483.68 9.70 0.00 0.00 50981.75 5620.97 57268.38 00:18:13.278 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:13.278 Verification LBA range: start 0x0 length 0xa0000 00:18:13.278 nvme3n1 : 5.09 1887.66 7.37 0.00 0.00 66995.42 8670.92 69367.34 00:18:13.278 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:13.278 Verification LBA range: start 0xa0000 length 0xa0000 00:18:13.278 nvme3n1 : 5.07 1969.04 7.69 0.00 0.00 64352.59 3932.16 63721.16 00:18:13.278 [2024-12-05T19:36:40.533Z] =================================================================================================================== 00:18:13.278 [2024-12-05T19:36:40.533Z] Total : 23708.32 92.61 0.00 0.00 64360.64 3932.16 98808.12 00:18:14.222 00:18:14.222 real 0m6.745s 00:18:14.222 user 0m10.916s 00:18:14.222 sys 0m1.468s 00:18:14.222 ************************************ 00:18:14.222 END TEST 
bdev_verify 00:18:14.222 ************************************ 00:18:14.222 19:36:41 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:14.222 19:36:41 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:14.222 19:36:41 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:14.222 19:36:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:14.222 19:36:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:14.222 19:36:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:14.222 ************************************ 00:18:14.222 START TEST bdev_verify_big_io 00:18:14.222 ************************************ 00:18:14.222 19:36:41 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:14.223 [2024-12-05 19:36:41.434789] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:18:14.223 [2024-12-05 19:36:41.434937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73436 ] 00:18:14.484 [2024-12-05 19:36:41.595020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:14.485 [2024-12-05 19:36:41.717794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.485 [2024-12-05 19:36:41.717808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.058 Running I/O for 5 seconds... 
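The MiB/s column bdevperf reports is derived directly from IOPS and the -o I/O size, which makes the tables easy to sanity-check: the 4 KiB verify run above totals 23708.32 IOPS × 4096 B ≈ 92.61 MiB/s, exactly the Total row, while the 64 KiB big-I/O run below reaches a comparable 95.28 MiB/s from only 1524.52 IOPS. As shell one-liners (values copied from the tables in this log):

    # IOPS × I/O size, converted to MiB/s.
    echo '23708.32 * 4096  / (1024 * 1024)' | bc -l   # 92.61, 4 KiB verify
    echo '1524.52  * 65536 / (1024 * 1024)' | bc -l   # 95.28, 64 KiB verify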
00:18:21.180 1810.00 IOPS, 113.12 MiB/s [2024-12-05T19:36:48.435Z] 3049.00 IOPS, 190.56 MiB/s 00:18:21.180 Latency(us) 00:18:21.180 [2024-12-05T19:36:48.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.180 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:21.180 Verification LBA range: start 0x0 length 0x8000 00:18:21.180 nvme0n1 : 5.81 96.32 6.02 0.00 0.00 1279663.14 221007.56 2168132.53 00:18:21.180 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:21.180 Verification LBA range: start 0x8000 length 0x8000 00:18:21.180 nvme0n1 : 5.81 121.21 7.58 0.00 0.00 1032579.11 50815.61 1071160.71 00:18:21.180 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:21.180 Verification LBA range: start 0x0 length 0x8000 00:18:21.180 nvme0n2 : 5.82 107.29 6.71 0.00 0.00 1106524.30 168578.76 1703532.70 00:18:21.180 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:21.180 Verification LBA range: start 0x8000 length 0x8000 00:18:21.180 nvme0n2 : 5.88 106.20 6.64 0.00 0.00 1135918.88 65737.65 1858399.31 00:18:21.180 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:21.180 Verification LBA range: start 0x0 length 0x8000 00:18:21.180 nvme0n3 : 5.88 100.67 6.29 0.00 0.00 1162644.75 75013.51 2310093.59 00:18:21.180 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:21.180 Verification LBA range: start 0x8000 length 0x8000 00:18:21.180 nvme0n3 : 5.88 119.74 7.48 0.00 0.00 978526.38 56058.49 1206669.00 00:18:21.180 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:21.180 Verification LBA range: start 0x0 length 0x2000 00:18:21.180 nvme1n1 : 5.88 130.56 8.16 0.00 0.00 860718.82 6200.71 942105.21 00:18:21.180 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:21.180 Verification LBA range: start 0x2000 length 0x2000 00:18:21.180 nvme1n1 : 5.88 127.85 7.99 0.00 0.00 876515.58 69367.34 1322818.95 00:18:21.180 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:21.180 Verification LBA range: start 0x0 length 0xbd0b 00:18:21.180 nvme2n1 : 5.90 184.86 11.55 0.00 0.00 595403.12 6805.66 890483.00 00:18:21.180 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:21.180 Verification LBA range: start 0xbd0b length 0xbd0b 00:18:21.180 nvme2n1 : 5.92 132.39 8.27 0.00 0.00 826978.92 2886.10 1406705.03 00:18:21.180 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:21.180 Verification LBA range: start 0x0 length 0xa000 00:18:21.180 nvme3n1 : 5.90 138.32 8.64 0.00 0.00 771476.28 1493.46 1206669.00 00:18:21.180 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:21.180 Verification LBA range: start 0xa000 length 0xa000 00:18:21.180 nvme3n1 : 5.93 159.11 9.94 0.00 0.00 666964.79 620.70 896935.78 00:18:21.180 [2024-12-05T19:36:48.435Z] =================================================================================================================== 00:18:21.180 [2024-12-05T19:36:48.435Z] Total : 1524.52 95.28 0.00 0.00 903442.52 620.70 2310093.59 00:18:22.125 00:18:22.125 real 0m7.833s 00:18:22.125 user 0m14.319s 00:18:22.125 sys 0m0.459s 00:18:22.125 ************************************ 00:18:22.125 END TEST bdev_verify_big_io 00:18:22.125 19:36:49 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:18:22.125 19:36:49 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:18:22.125 ************************************ 00:18:22.125 19:36:49 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:22.125 19:36:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:22.125 19:36:49 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.125 19:36:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:22.125 ************************************ 00:18:22.125 START TEST bdev_write_zeroes 00:18:22.125 ************************************ 00:18:22.125 19:36:49 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:22.125 [2024-12-05 19:36:49.346554] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:18:22.125 [2024-12-05 19:36:49.346722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73547 ] 00:18:22.387 [2024-12-05 19:36:49.511550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.387 [2024-12-05 19:36:49.632456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.959 Running I/O for 1 seconds... 00:18:23.905 68320.00 IOPS, 266.88 MiB/s 00:18:23.905 Latency(us) 00:18:23.905 [2024-12-05T19:36:51.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.905 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:23.905 nvme0n1 : 1.01 11225.89 43.85 0.00 0.00 11390.41 8368.44 23996.26 00:18:23.905 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:23.905 nvme0n2 : 1.02 11211.64 43.80 0.00 0.00 11395.00 8418.86 22584.71 00:18:23.905 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:23.905 nvme0n3 : 1.02 11196.81 43.74 0.00 0.00 11400.16 8418.86 23088.84 00:18:23.905 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:23.905 nvme1n1 : 1.02 11182.63 43.68 0.00 0.00 11400.54 8469.27 23592.96 00:18:23.905 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:23.905 nvme2n1 : 1.03 11941.17 46.65 0.00 0.00 10665.89 2999.53 21576.47 00:18:23.905 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:23.905 nvme3n1 : 1.03 11210.76 43.79 0.00 0.00 11326.27 7763.50 24399.56 00:18:23.905 [2024-12-05T19:36:51.160Z] =================================================================================================================== 00:18:23.905 [2024-12-05T19:36:51.160Z] Total : 67968.90 265.50 0.00 0.00 11255.72 2999.53 24399.56 00:18:24.849 00:18:24.849 real 0m2.619s 00:18:24.849 user 0m1.921s 00:18:24.849 sys 0m0.498s 00:18:24.849 ************************************ 00:18:24.849 END TEST bdev_write_zeroes 00:18:24.849 ************************************ 00:18:24.849 19:36:51 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:24.849 19:36:51 
blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:18:24.849 19:36:51 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:24.849 19:36:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:24.849 19:36:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:24.849 19:36:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:24.849 ************************************ 00:18:24.849 START TEST bdev_json_nonenclosed 00:18:24.849 ************************************ 00:18:24.849 19:36:51 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:24.849 [2024-12-05 19:36:52.032794] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:18:24.849 [2024-12-05 19:36:52.032953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73599 ] 00:18:25.110 [2024-12-05 19:36:52.196389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.110 [2024-12-05 19:36:52.315013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.110 [2024-12-05 19:36:52.315110] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:18:25.110 [2024-12-05 19:36:52.315129] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:25.110 [2024-12-05 19:36:52.315140] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:25.371 00:18:25.371 real 0m0.544s 00:18:25.371 user 0m0.334s 00:18:25.371 sys 0m0.104s 00:18:25.371 ************************************ 00:18:25.371 END TEST bdev_json_nonenclosed 00:18:25.371 ************************************ 00:18:25.371 19:36:52 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.371 19:36:52 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:25.371 19:36:52 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:25.371 19:36:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:25.371 19:36:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.371 19:36:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:25.371 ************************************ 00:18:25.371 START TEST bdev_json_nonarray 00:18:25.371 ************************************ 00:18:25.371 19:36:52 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:25.631 [2024-12-05 19:36:52.644814] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:18:25.631 [2024-12-05 19:36:52.644974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73620 ] 00:18:25.631 [2024-12-05 19:36:52.811214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.893 [2024-12-05 19:36:52.935391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.893 [2024-12-05 19:36:52.935504] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:18:25.893 [2024-12-05 19:36:52.935525] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:25.893 [2024-12-05 19:36:52.935536] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:25.893 00:18:25.893 real 0m0.557s 00:18:25.893 user 0m0.335s 00:18:25.893 sys 0m0.115s 00:18:25.893 ************************************ 00:18:25.893 END TEST bdev_json_nonarray 00:18:25.893 ************************************ 00:18:25.893 19:36:53 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.893 19:36:53 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:26.154 19:36:53 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:18:26.154 19:36:53 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:18:26.154 19:36:53 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:18:26.154 19:36:53 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:18:26.154 19:36:53 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:18:26.154 19:36:53 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:26.154 19:36:53 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:26.154 19:36:53 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:18:26.154 19:36:53 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:18:26.154 19:36:53 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:18:26.154 19:36:53 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:18:26.154 19:36:53 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:26.725 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:53.305 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:53.305 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:55.238 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:55.238 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:55.238 00:18:55.238 real 1m17.425s 00:18:55.238 user 1m22.853s 00:18:55.238 sys 1m6.413s 00:18:55.238 19:37:22 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:55.238 ************************************ 00:18:55.238 END TEST blockdev_xnvme 00:18:55.238 ************************************ 00:18:55.238 19:37:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:55.238 19:37:22 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:55.238 19:37:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:55.238 19:37:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.238 19:37:22 -- 
common/autotest_common.sh@10 -- # set +x 00:18:55.238 ************************************ 00:18:55.238 START TEST ublk 00:18:55.238 ************************************ 00:18:55.238 19:37:22 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:55.238 * Looking for test storage... 00:18:55.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:55.238 19:37:22 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:55.238 19:37:22 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:18:55.238 19:37:22 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:55.238 19:37:22 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:55.238 19:37:22 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:55.238 19:37:22 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:55.238 19:37:22 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:55.238 19:37:22 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.238 19:37:22 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:18:55.238 19:37:22 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:18:55.238 19:37:22 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:18:55.238 19:37:22 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:18:55.238 19:37:22 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:18:55.238 19:37:22 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:18:55.238 19:37:22 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:55.238 19:37:22 ublk -- scripts/common.sh@344 -- # case "$op" in 00:18:55.238 19:37:22 ublk -- scripts/common.sh@345 -- # : 1 00:18:55.238 19:37:22 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:55.238 19:37:22 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:55.238 19:37:22 ublk -- scripts/common.sh@365 -- # decimal 1 00:18:55.238 19:37:22 ublk -- scripts/common.sh@353 -- # local d=1 00:18:55.238 19:37:22 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.238 19:37:22 ublk -- scripts/common.sh@355 -- # echo 1 00:18:55.239 19:37:22 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:18:55.239 19:37:22 ublk -- scripts/common.sh@366 -- # decimal 2 00:18:55.239 19:37:22 ublk -- scripts/common.sh@353 -- # local d=2 00:18:55.239 19:37:22 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:55.239 19:37:22 ublk -- scripts/common.sh@355 -- # echo 2 00:18:55.239 19:37:22 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:18:55.239 19:37:22 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.239 19:37:22 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:55.239 19:37:22 ublk -- scripts/common.sh@368 -- # return 0 00:18:55.239 19:37:22 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:55.239 19:37:22 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:55.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.239 --rc genhtml_branch_coverage=1 00:18:55.239 --rc genhtml_function_coverage=1 00:18:55.239 --rc genhtml_legend=1 00:18:55.239 --rc geninfo_all_blocks=1 00:18:55.239 --rc geninfo_unexecuted_blocks=1 00:18:55.239 00:18:55.239 ' 00:18:55.239 19:37:22 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:55.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.239 --rc genhtml_branch_coverage=1 00:18:55.239 --rc genhtml_function_coverage=1 00:18:55.239 --rc genhtml_legend=1 00:18:55.239 --rc geninfo_all_blocks=1 00:18:55.239 --rc geninfo_unexecuted_blocks=1 00:18:55.239 00:18:55.239 ' 00:18:55.239 19:37:22 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:55.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.239 --rc genhtml_branch_coverage=1 00:18:55.239 --rc genhtml_function_coverage=1 00:18:55.239 --rc genhtml_legend=1 00:18:55.239 --rc geninfo_all_blocks=1 00:18:55.239 --rc geninfo_unexecuted_blocks=1 00:18:55.239 00:18:55.239 ' 00:18:55.239 19:37:22 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:55.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.239 --rc genhtml_branch_coverage=1 00:18:55.239 --rc genhtml_function_coverage=1 00:18:55.239 --rc genhtml_legend=1 00:18:55.239 --rc geninfo_all_blocks=1 00:18:55.239 --rc geninfo_unexecuted_blocks=1 00:18:55.239 00:18:55.239 ' 00:18:55.239 19:37:22 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:55.239 19:37:22 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:55.239 19:37:22 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:55.239 19:37:22 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:55.239 19:37:22 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:55.239 19:37:22 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:55.239 19:37:22 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:55.239 19:37:22 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:55.239 19:37:22 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:18:55.239 19:37:22 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:18:55.239 19:37:22 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:18:55.239 19:37:22 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:18:55.239 19:37:22 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:18:55.239 19:37:22 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:18:55.239 19:37:22 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:18:55.239 19:37:22 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:18:55.239 19:37:22 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:18:55.239 19:37:22 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:18:55.239 19:37:22 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:18:55.239 19:37:22 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:18:55.239 19:37:22 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:55.239 19:37:22 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.239 19:37:22 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:55.239 ************************************ 00:18:55.239 START TEST test_save_ublk_config 00:18:55.239 ************************************ 00:18:55.239 19:37:22 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:18:55.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.239 19:37:22 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:18:55.239 19:37:22 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73928 00:18:55.239 19:37:22 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:18:55.239 19:37:22 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73928 00:18:55.239 19:37:22 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:18:55.239 19:37:22 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73928 ']' 00:18:55.239 19:37:22 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.239 19:37:22 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.239 19:37:22 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.239 19:37:22 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.239 19:37:22 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:55.239 [2024-12-05 19:37:22.448309] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
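In outline, test_save_ublk_config brings up spdk_tgt with ublk debug logging (-L ublk), creates a ublk target and a malloc-backed ublk disk over RPC, and then captures the entire runtime state with the save_config RPC; the large JSON dump a little further below is that captured state. A sketch of the save step and the replay a later run can perform (reloading spdk_tgt via --json is an assumption, mirrored from how bdevperf consumes JSON configs elsewhere in this log):

    # Sketch: capture target state with save_config, replay it on restart.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" save_config > /tmp/ublk_config.json
    # Assumed reload path: --json is how bdevperf loads JSON configs in this
    # log; using the same flag for spdk_tgt is an assumption.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --json /tmp/ublk_config.json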
00:18:55.239 [2024-12-05 19:37:22.448707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73928 ] 00:18:55.500 [2024-12-05 19:37:22.613782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.500 [2024-12-05 19:37:22.739110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.441 19:37:23 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.441 19:37:23 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:18:56.441 19:37:23 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:18:56.441 19:37:23 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:18:56.441 19:37:23 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.441 19:37:23 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:56.441 [2024-12-05 19:37:23.475697] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:56.441 [2024-12-05 19:37:23.476599] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:56.441 malloc0 00:18:56.441 [2024-12-05 19:37:23.547832] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:56.441 [2024-12-05 19:37:23.547937] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:56.441 [2024-12-05 19:37:23.547950] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:56.441 [2024-12-05 19:37:23.547958] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:56.441 [2024-12-05 19:37:23.556799] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:56.441 [2024-12-05 19:37:23.556830] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:56.441 [2024-12-05 19:37:23.563709] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:56.441 [2024-12-05 19:37:23.563839] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:56.441 [2024-12-05 19:37:23.580712] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:56.441 0 00:18:56.441 19:37:23 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.441 19:37:23 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:18:56.441 19:37:23 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.441 19:37:23 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:56.702 19:37:23 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.702 19:37:23 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:18:56.702 "subsystems": [ 00:18:56.702 { 00:18:56.702 "subsystem": "fsdev", 00:18:56.702 "config": [ 00:18:56.702 { 00:18:56.702 "method": "fsdev_set_opts", 00:18:56.702 "params": { 00:18:56.702 "fsdev_io_pool_size": 65535, 00:18:56.702 "fsdev_io_cache_size": 256 00:18:56.702 } 00:18:56.702 } 00:18:56.702 ] 00:18:56.702 }, 00:18:56.702 { 00:18:56.702 "subsystem": "keyring", 00:18:56.702 "config": [] 00:18:56.702 }, 00:18:56.702 { 00:18:56.702 "subsystem": "iobuf", 00:18:56.702 "config": [ 00:18:56.702 { 
00:18:56.702 "method": "iobuf_set_options", 00:18:56.702 "params": { 00:18:56.702 "small_pool_count": 8192, 00:18:56.702 "large_pool_count": 1024, 00:18:56.702 "small_bufsize": 8192, 00:18:56.702 "large_bufsize": 135168, 00:18:56.702 "enable_numa": false 00:18:56.702 } 00:18:56.703 } 00:18:56.703 ] 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "subsystem": "sock", 00:18:56.703 "config": [ 00:18:56.703 { 00:18:56.703 "method": "sock_set_default_impl", 00:18:56.703 "params": { 00:18:56.703 "impl_name": "posix" 00:18:56.703 } 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "method": "sock_impl_set_options", 00:18:56.703 "params": { 00:18:56.703 "impl_name": "ssl", 00:18:56.703 "recv_buf_size": 4096, 00:18:56.703 "send_buf_size": 4096, 00:18:56.703 "enable_recv_pipe": true, 00:18:56.703 "enable_quickack": false, 00:18:56.703 "enable_placement_id": 0, 00:18:56.703 "enable_zerocopy_send_server": true, 00:18:56.703 "enable_zerocopy_send_client": false, 00:18:56.703 "zerocopy_threshold": 0, 00:18:56.703 "tls_version": 0, 00:18:56.703 "enable_ktls": false 00:18:56.703 } 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "method": "sock_impl_set_options", 00:18:56.703 "params": { 00:18:56.703 "impl_name": "posix", 00:18:56.703 "recv_buf_size": 2097152, 00:18:56.703 "send_buf_size": 2097152, 00:18:56.703 "enable_recv_pipe": true, 00:18:56.703 "enable_quickack": false, 00:18:56.703 "enable_placement_id": 0, 00:18:56.703 "enable_zerocopy_send_server": true, 00:18:56.703 "enable_zerocopy_send_client": false, 00:18:56.703 "zerocopy_threshold": 0, 00:18:56.703 "tls_version": 0, 00:18:56.703 "enable_ktls": false 00:18:56.703 } 00:18:56.703 } 00:18:56.703 ] 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "subsystem": "vmd", 00:18:56.703 "config": [] 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "subsystem": "accel", 00:18:56.703 "config": [ 00:18:56.703 { 00:18:56.703 "method": "accel_set_options", 00:18:56.703 "params": { 00:18:56.703 "small_cache_size": 128, 00:18:56.703 "large_cache_size": 16, 00:18:56.703 "task_count": 2048, 00:18:56.703 "sequence_count": 2048, 00:18:56.703 "buf_count": 2048 00:18:56.703 } 00:18:56.703 } 00:18:56.703 ] 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "subsystem": "bdev", 00:18:56.703 "config": [ 00:18:56.703 { 00:18:56.703 "method": "bdev_set_options", 00:18:56.703 "params": { 00:18:56.703 "bdev_io_pool_size": 65535, 00:18:56.703 "bdev_io_cache_size": 256, 00:18:56.703 "bdev_auto_examine": true, 00:18:56.703 "iobuf_small_cache_size": 128, 00:18:56.703 "iobuf_large_cache_size": 16 00:18:56.703 } 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "method": "bdev_raid_set_options", 00:18:56.703 "params": { 00:18:56.703 "process_window_size_kb": 1024, 00:18:56.703 "process_max_bandwidth_mb_sec": 0 00:18:56.703 } 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "method": "bdev_iscsi_set_options", 00:18:56.703 "params": { 00:18:56.703 "timeout_sec": 30 00:18:56.703 } 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "method": "bdev_nvme_set_options", 00:18:56.703 "params": { 00:18:56.703 "action_on_timeout": "none", 00:18:56.703 "timeout_us": 0, 00:18:56.703 "timeout_admin_us": 0, 00:18:56.703 "keep_alive_timeout_ms": 10000, 00:18:56.703 "arbitration_burst": 0, 00:18:56.703 "low_priority_weight": 0, 00:18:56.703 "medium_priority_weight": 0, 00:18:56.703 "high_priority_weight": 0, 00:18:56.703 "nvme_adminq_poll_period_us": 10000, 00:18:56.703 "nvme_ioq_poll_period_us": 0, 00:18:56.703 "io_queue_requests": 0, 00:18:56.703 "delay_cmd_submit": true, 00:18:56.703 "transport_retry_count": 4, 00:18:56.703 
"bdev_retry_count": 3, 00:18:56.703 "transport_ack_timeout": 0, 00:18:56.703 "ctrlr_loss_timeout_sec": 0, 00:18:56.703 "reconnect_delay_sec": 0, 00:18:56.703 "fast_io_fail_timeout_sec": 0, 00:18:56.703 "disable_auto_failback": false, 00:18:56.703 "generate_uuids": false, 00:18:56.703 "transport_tos": 0, 00:18:56.703 "nvme_error_stat": false, 00:18:56.703 "rdma_srq_size": 0, 00:18:56.703 "io_path_stat": false, 00:18:56.703 "allow_accel_sequence": false, 00:18:56.703 "rdma_max_cq_size": 0, 00:18:56.703 "rdma_cm_event_timeout_ms": 0, 00:18:56.703 "dhchap_digests": [ 00:18:56.703 "sha256", 00:18:56.703 "sha384", 00:18:56.703 "sha512" 00:18:56.703 ], 00:18:56.703 "dhchap_dhgroups": [ 00:18:56.703 "null", 00:18:56.703 "ffdhe2048", 00:18:56.703 "ffdhe3072", 00:18:56.703 "ffdhe4096", 00:18:56.703 "ffdhe6144", 00:18:56.703 "ffdhe8192" 00:18:56.703 ] 00:18:56.703 } 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "method": "bdev_nvme_set_hotplug", 00:18:56.703 "params": { 00:18:56.703 "period_us": 100000, 00:18:56.703 "enable": false 00:18:56.703 } 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "method": "bdev_malloc_create", 00:18:56.703 "params": { 00:18:56.703 "name": "malloc0", 00:18:56.703 "num_blocks": 8192, 00:18:56.703 "block_size": 4096, 00:18:56.703 "physical_block_size": 4096, 00:18:56.703 "uuid": "0e4ee645-7624-4269-9e66-d506ac85a179", 00:18:56.703 "optimal_io_boundary": 0, 00:18:56.703 "md_size": 0, 00:18:56.703 "dif_type": 0, 00:18:56.703 "dif_is_head_of_md": false, 00:18:56.703 "dif_pi_format": 0 00:18:56.703 } 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "method": "bdev_wait_for_examine" 00:18:56.703 } 00:18:56.703 ] 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "subsystem": "scsi", 00:18:56.703 "config": null 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "subsystem": "scheduler", 00:18:56.703 "config": [ 00:18:56.703 { 00:18:56.703 "method": "framework_set_scheduler", 00:18:56.703 "params": { 00:18:56.703 "name": "static" 00:18:56.703 } 00:18:56.703 } 00:18:56.703 ] 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "subsystem": "vhost_scsi", 00:18:56.703 "config": [] 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "subsystem": "vhost_blk", 00:18:56.703 "config": [] 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "subsystem": "ublk", 00:18:56.703 "config": [ 00:18:56.703 { 00:18:56.703 "method": "ublk_create_target", 00:18:56.703 "params": { 00:18:56.703 "cpumask": "1" 00:18:56.703 } 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "method": "ublk_start_disk", 00:18:56.703 "params": { 00:18:56.703 "bdev_name": "malloc0", 00:18:56.703 "ublk_id": 0, 00:18:56.703 "num_queues": 1, 00:18:56.703 "queue_depth": 128 00:18:56.703 } 00:18:56.703 } 00:18:56.703 ] 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "subsystem": "nbd", 00:18:56.703 "config": [] 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "subsystem": "nvmf", 00:18:56.703 "config": [ 00:18:56.703 { 00:18:56.703 "method": "nvmf_set_config", 00:18:56.703 "params": { 00:18:56.703 "discovery_filter": "match_any", 00:18:56.703 "admin_cmd_passthru": { 00:18:56.703 "identify_ctrlr": false 00:18:56.703 }, 00:18:56.703 "dhchap_digests": [ 00:18:56.703 "sha256", 00:18:56.703 "sha384", 00:18:56.703 "sha512" 00:18:56.703 ], 00:18:56.703 "dhchap_dhgroups": [ 00:18:56.703 "null", 00:18:56.703 "ffdhe2048", 00:18:56.703 "ffdhe3072", 00:18:56.703 "ffdhe4096", 00:18:56.703 "ffdhe6144", 00:18:56.703 "ffdhe8192" 00:18:56.703 ] 00:18:56.703 } 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "method": "nvmf_set_max_subsystems", 00:18:56.703 "params": { 00:18:56.703 "max_subsystems": 1024 
00:18:56.703 } 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "method": "nvmf_set_crdt", 00:18:56.703 "params": { 00:18:56.703 "crdt1": 0, 00:18:56.703 "crdt2": 0, 00:18:56.703 "crdt3": 0 00:18:56.703 } 00:18:56.703 } 00:18:56.703 ] 00:18:56.703 }, 00:18:56.703 { 00:18:56.703 "subsystem": "iscsi", 00:18:56.703 "config": [ 00:18:56.703 { 00:18:56.703 "method": "iscsi_set_options", 00:18:56.703 "params": { 00:18:56.703 "node_base": "iqn.2016-06.io.spdk", 00:18:56.703 "max_sessions": 128, 00:18:56.703 "max_connections_per_session": 2, 00:18:56.703 "max_queue_depth": 64, 00:18:56.703 "default_time2wait": 2, 00:18:56.703 "default_time2retain": 20, 00:18:56.703 "first_burst_length": 8192, 00:18:56.703 "immediate_data": true, 00:18:56.703 "allow_duplicated_isid": false, 00:18:56.703 "error_recovery_level": 0, 00:18:56.703 "nop_timeout": 60, 00:18:56.703 "nop_in_interval": 30, 00:18:56.703 "disable_chap": false, 00:18:56.703 "require_chap": false, 00:18:56.703 "mutual_chap": false, 00:18:56.703 "chap_group": 0, 00:18:56.703 "max_large_datain_per_connection": 64, 00:18:56.703 "max_r2t_per_connection": 4, 00:18:56.703 "pdu_pool_size": 36864, 00:18:56.703 "immediate_data_pool_size": 16384, 00:18:56.703 "data_out_pool_size": 2048 00:18:56.703 } 00:18:56.703 } 00:18:56.703 ] 00:18:56.703 } 00:18:56.703 ] 00:18:56.703 }' 00:18:56.703 19:37:23 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73928 00:18:56.703 19:37:23 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73928 ']' 00:18:56.703 19:37:23 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73928 00:18:56.704 19:37:23 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:18:56.704 19:37:23 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.704 19:37:23 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73928 00:18:56.704 19:37:23 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:56.704 19:37:23 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:56.704 killing process with pid 73928 00:18:56.704 19:37:23 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73928' 00:18:56.704 19:37:23 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73928 00:18:56.704 19:37:23 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73928 00:18:58.088 [2024-12-05 19:37:25.013237] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:58.088 [2024-12-05 19:37:25.047820] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:58.088 [2024-12-05 19:37:25.047973] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:58.088 [2024-12-05 19:37:25.055714] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:58.088 [2024-12-05 19:37:25.055775] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:58.088 [2024-12-05 19:37:25.055790] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:58.088 [2024-12-05 19:37:25.055825] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:58.088 [2024-12-05 19:37:25.055998] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:59.468 19:37:26 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73988 00:18:59.468 19:37:26 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 73988 00:18:59.468 19:37:26 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73988 ']' 00:18:59.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.468 19:37:26 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.468 19:37:26 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:59.469 19:37:26 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.469 19:37:26 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:59.469 19:37:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:59.469 19:37:26 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:18:59.469 "subsystems": [ 00:18:59.469 { 00:18:59.469 "subsystem": "fsdev", 00:18:59.469 "config": [ 00:18:59.469 { 00:18:59.469 "method": "fsdev_set_opts", 00:18:59.469 "params": { 00:18:59.469 "fsdev_io_pool_size": 65535, 00:18:59.469 "fsdev_io_cache_size": 256 00:18:59.469 } 00:18:59.469 } 00:18:59.469 ] 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "subsystem": "keyring", 00:18:59.469 "config": [] 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "subsystem": "iobuf", 00:18:59.469 "config": [ 00:18:59.469 { 00:18:59.469 "method": "iobuf_set_options", 00:18:59.469 "params": { 00:18:59.469 "small_pool_count": 8192, 00:18:59.469 "large_pool_count": 1024, 00:18:59.469 "small_bufsize": 8192, 00:18:59.469 "large_bufsize": 135168, 00:18:59.469 "enable_numa": false 00:18:59.469 } 00:18:59.469 } 00:18:59.469 ] 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "subsystem": "sock", 00:18:59.469 "config": [ 00:18:59.469 { 00:18:59.469 "method": "sock_set_default_impl", 00:18:59.469 "params": { 00:18:59.469 "impl_name": "posix" 00:18:59.469 } 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "method": "sock_impl_set_options", 00:18:59.469 "params": { 00:18:59.469 "impl_name": "ssl", 00:18:59.469 "recv_buf_size": 4096, 00:18:59.469 "send_buf_size": 4096, 00:18:59.469 "enable_recv_pipe": true, 00:18:59.469 "enable_quickack": false, 00:18:59.469 "enable_placement_id": 0, 00:18:59.469 "enable_zerocopy_send_server": true, 00:18:59.469 "enable_zerocopy_send_client": false, 00:18:59.469 "zerocopy_threshold": 0, 00:18:59.469 "tls_version": 0, 00:18:59.469 "enable_ktls": false 00:18:59.469 } 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "method": "sock_impl_set_options", 00:18:59.469 "params": { 00:18:59.469 "impl_name": "posix", 00:18:59.469 "recv_buf_size": 2097152, 00:18:59.469 "send_buf_size": 2097152, 00:18:59.469 "enable_recv_pipe": true, 00:18:59.469 "enable_quickack": false, 00:18:59.469 "enable_placement_id": 0, 00:18:59.469 "enable_zerocopy_send_server": true, 00:18:59.469 "enable_zerocopy_send_client": false, 00:18:59.469 "zerocopy_threshold": 0, 00:18:59.469 "tls_version": 0, 00:18:59.469 "enable_ktls": false 00:18:59.469 } 00:18:59.469 } 00:18:59.469 ] 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "subsystem": "vmd", 00:18:59.469 "config": [] 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "subsystem": "accel", 00:18:59.469 "config": [ 00:18:59.469 { 00:18:59.469 "method": "accel_set_options", 00:18:59.469 "params": { 00:18:59.469 "small_cache_size": 128, 00:18:59.469 "large_cache_size": 16, 00:18:59.469 "task_count": 2048, 00:18:59.469 "sequence_count": 2048, 00:18:59.469 "buf_count": 2048 00:18:59.469 
} 00:18:59.469 } 00:18:59.469 ] 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "subsystem": "bdev", 00:18:59.469 "config": [ 00:18:59.469 { 00:18:59.469 "method": "bdev_set_options", 00:18:59.469 "params": { 00:18:59.469 "bdev_io_pool_size": 65535, 00:18:59.469 "bdev_io_cache_size": 256, 00:18:59.469 "bdev_auto_examine": true, 00:18:59.469 "iobuf_small_cache_size": 128, 00:18:59.469 "iobuf_large_cache_size": 16 00:18:59.469 } 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "method": "bdev_raid_set_options", 00:18:59.469 "params": { 00:18:59.469 "process_window_size_kb": 1024, 00:18:59.469 "process_max_bandwidth_mb_sec": 0 00:18:59.469 } 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "method": "bdev_iscsi_set_options", 00:18:59.469 "params": { 00:18:59.469 "timeout_sec": 30 00:18:59.469 } 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "method": "bdev_nvme_set_options", 00:18:59.469 "params": { 00:18:59.469 "action_on_timeout": "none", 00:18:59.469 "timeout_us": 0, 00:18:59.469 "timeout_admin_us": 0, 00:18:59.469 "keep_alive_timeout_ms": 10000, 00:18:59.469 "arbitration_burst": 0, 00:18:59.469 "low_priority_weight": 0, 00:18:59.469 "medium_priority_weight": 0, 00:18:59.469 "high_priority_weight": 0, 00:18:59.469 "nvme_adminq_poll_period_us": 10000, 00:18:59.469 "nvme_ioq_poll_period_us": 0, 00:18:59.469 "io_queue_requests": 0, 00:18:59.469 "delay_cmd_submit": true, 00:18:59.469 "transport_retry_count": 4, 00:18:59.469 "bdev_retry_count": 3, 00:18:59.469 "transport_ack_timeout": 0, 00:18:59.469 "ctrlr_loss_timeout_sec": 0, 00:18:59.469 "reconnect_delay_sec": 0, 00:18:59.469 "fast_io_fail_timeout_sec": 0, 00:18:59.469 "disable_auto_failback": false, 00:18:59.469 "generate_uuids": false, 00:18:59.469 "transport_tos": 0, 00:18:59.469 "nvme_error_stat": false, 00:18:59.469 "rdma_srq_size": 0, 00:18:59.469 "io_path_stat": false, 00:18:59.469 "allow_accel_sequence": false, 00:18:59.469 "rdma_max_cq_size": 0, 00:18:59.469 "rdma_cm_event_timeout_ms": 0, 00:18:59.469 "dhchap_digests": [ 00:18:59.469 "sha256", 00:18:59.469 "sha384", 00:18:59.469 "sha512" 00:18:59.469 ], 00:18:59.469 "dhchap_dhgroups": [ 00:18:59.469 "null", 00:18:59.469 "ffdhe2048", 00:18:59.469 "ffdhe3072", 00:18:59.469 "ffdhe4096", 00:18:59.469 "ffdhe6144", 00:18:59.469 "ffdhe8192" 00:18:59.469 ] 00:18:59.469 } 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "method": "bdev_nvme_set_hotplug", 00:18:59.469 "params": { 00:18:59.469 "period_us": 100000, 00:18:59.469 "enable": false 00:18:59.469 } 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "method": "bdev_malloc_create", 00:18:59.469 "params": { 00:18:59.469 "name": "malloc0", 00:18:59.469 "num_blocks": 8192, 00:18:59.469 "block_size": 4096, 00:18:59.469 "physical_block_size": 4096, 00:18:59.469 "uuid": "0e4ee645-7624-4269-9e66-d506ac85a179", 00:18:59.469 "optimal_io_boundary": 0, 00:18:59.469 "md_size": 0, 00:18:59.469 "dif_type": 0, 00:18:59.469 "dif_is_head_of_md": false, 00:18:59.469 "dif_pi_format": 0 00:18:59.469 } 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "method": "bdev_wait_for_examine" 00:18:59.469 } 00:18:59.469 ] 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "subsystem": "scsi", 00:18:59.469 "config": null 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "subsystem": "scheduler", 00:18:59.469 "config": [ 00:18:59.469 { 00:18:59.469 "method": "framework_set_scheduler", 00:18:59.469 "params": { 00:18:59.469 "name": "static" 00:18:59.469 } 00:18:59.469 } 00:18:59.469 ] 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "subsystem": "vhost_scsi", 00:18:59.469 "config": [] 00:18:59.469 }, 00:18:59.469 { 
00:18:59.469 "subsystem": "vhost_blk", 00:18:59.469 "config": [] 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "subsystem": "ublk", 00:18:59.469 "config": [ 00:18:59.469 { 00:18:59.469 "method": "ublk_create_target", 00:18:59.469 "params": { 00:18:59.469 "cpumask": "1" 00:18:59.469 } 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "method": "ublk_start_disk", 00:18:59.469 "params": { 00:18:59.469 "bdev_name": "malloc0", 00:18:59.469 "ublk_id": 0, 00:18:59.469 "num_queues": 1, 00:18:59.469 "queue_depth": 128 00:18:59.469 } 00:18:59.469 } 00:18:59.469 ] 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "subsystem": "nbd", 00:18:59.469 "config": [] 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "subsystem": "nvmf", 00:18:59.469 "config": [ 00:18:59.469 { 00:18:59.469 "method": "nvmf_set_config", 00:18:59.469 "params": { 00:18:59.469 "discovery_filter": "match_any", 00:18:59.469 "admin_cmd_passthru": { 00:18:59.469 "identify_ctrlr": false 00:18:59.469 }, 00:18:59.469 "dhchap_digests": [ 00:18:59.469 "sha256", 00:18:59.469 "sha384", 00:18:59.469 "sha512" 00:18:59.469 ], 00:18:59.469 "dhchap_dhgroups": [ 00:18:59.469 "null", 00:18:59.469 "ffdhe2048", 00:18:59.469 "ffdhe3072", 00:18:59.469 "ffdhe4096", 00:18:59.469 "ffdhe6144", 00:18:59.469 "ffdhe8192" 00:18:59.469 ] 00:18:59.469 } 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "method": "nvmf_set_max_subsystems", 00:18:59.469 "params": { 00:18:59.469 "max_subsystems": 1024 00:18:59.469 } 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "method": "nvmf_set_crdt", 00:18:59.469 "params": { 00:18:59.469 "crdt1": 0, 00:18:59.469 "crdt2": 0, 00:18:59.469 "crdt3": 0 00:18:59.469 } 00:18:59.469 } 00:18:59.469 ] 00:18:59.469 }, 00:18:59.469 { 00:18:59.469 "subsystem": "iscsi", 00:18:59.469 "config": [ 00:18:59.469 { 00:18:59.469 "method": "iscsi_set_options", 00:18:59.469 "params": { 00:18:59.469 "node_base": "iqn.2016-06.io.spdk", 00:18:59.469 "max_sessions": 128, 00:18:59.469 "max_connections_per_session": 2, 00:18:59.469 "max_queue_depth": 64, 00:18:59.470 "default_time2wait": 2, 00:18:59.470 "default_time2retain": 20, 00:18:59.470 "first_burst_length": 8192, 00:18:59.470 "immediate_data": true, 00:18:59.470 "allow_duplicated_isid": false, 00:18:59.470 "error_recovery_level": 0, 00:18:59.470 "nop_timeout": 60, 00:18:59.470 "nop_in_interval": 30, 00:18:59.470 "disable_chap": false, 00:18:59.470 "require_chap": false, 00:18:59.470 "mutual_chap": false, 00:18:59.470 "chap_group": 0, 00:18:59.470 "max_large_datain_per_connection": 64, 00:18:59.470 "max_r2t_per_connection": 4, 00:18:59.470 "pdu_pool_size": 36864, 00:18:59.470 "immediate_data_pool_size": 16384, 00:18:59.470 "data_out_pool_size": 2048 00:18:59.470 } 00:18:59.470 } 00:18:59.470 ] 00:18:59.470 } 00:18:59.470 ] 00:18:59.470 }' 00:18:59.470 19:37:26 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:18:59.470 [2024-12-05 19:37:26.546933] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:18:59.470 [2024-12-05 19:37:26.547090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73988 ] 00:18:59.470 [2024-12-05 19:37:26.702096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.727 [2024-12-05 19:37:26.789833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.294 [2024-12-05 19:37:27.431684] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:00.294 [2024-12-05 19:37:27.432319] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:00.294 [2024-12-05 19:37:27.439771] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:19:00.294 [2024-12-05 19:37:27.439831] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:19:00.294 [2024-12-05 19:37:27.439839] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:00.294 [2024-12-05 19:37:27.439844] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:00.294 [2024-12-05 19:37:27.448740] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:00.294 [2024-12-05 19:37:27.448760] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:00.294 [2024-12-05 19:37:27.455691] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:00.294 [2024-12-05 19:37:27.455763] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:00.294 [2024-12-05 19:37:27.472686] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:00.294 19:37:27 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.294 19:37:27 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:19:00.294 19:37:27 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:19:00.294 19:37:27 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:19:00.294 19:37:27 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.294 19:37:27 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:00.294 19:37:27 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.563 19:37:27 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:19:00.563 19:37:27 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:19:00.563 19:37:27 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73988 00:19:00.563 19:37:27 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73988 ']' 00:19:00.563 19:37:27 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73988 00:19:00.563 19:37:27 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:19:00.563 19:37:27 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.563 19:37:27 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73988 00:19:00.563 killing process with pid 73988 00:19:00.563 19:37:27 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:00.563 
19:37:27 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:00.563 19:37:27 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73988' 00:19:00.563 19:37:27 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73988 00:19:00.563 19:37:27 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73988 00:19:01.499 [2024-12-05 19:37:28.644198] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:01.499 [2024-12-05 19:37:28.688697] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:01.499 [2024-12-05 19:37:28.688802] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:01.499 [2024-12-05 19:37:28.696692] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:01.499 [2024-12-05 19:37:28.696733] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:01.499 [2024-12-05 19:37:28.696739] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:01.499 [2024-12-05 19:37:28.696759] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:01.499 [2024-12-05 19:37:28.696866] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:02.877 19:37:29 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:19:02.877 00:19:02.877 real 0m7.515s 00:19:02.877 user 0m5.186s 00:19:02.877 sys 0m2.971s 00:19:02.877 19:37:29 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.877 ************************************ 00:19:02.877 END TEST test_save_ublk_config 00:19:02.877 ************************************ 00:19:02.877 19:37:29 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:02.877 19:37:29 ublk -- ublk/ublk.sh@139 -- # spdk_pid=74057 00:19:02.877 19:37:29 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:02.877 19:37:29 ublk -- ublk/ublk.sh@141 -- # waitforlisten 74057 00:19:02.877 19:37:29 ublk -- common/autotest_common.sh@835 -- # '[' -z 74057 ']' 00:19:02.877 19:37:29 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.877 19:37:29 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:02.877 19:37:29 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.877 19:37:29 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.877 19:37:29 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.877 19:37:29 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:02.877 [2024-12-05 19:37:29.981722] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
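This second target comes up with a two-core mask (-m 0x3) before the create tests run. Roughly what waitforlisten does can be sketched as a poll loop against the default RPC socket; the spdk_get_version probe is an illustrative stand-in for the script's own socket check:

    ./build/bin/spdk_tgt -m 0x3 -L ublk &
    # Poll until the RPC socket accepts commands (approximates waitforlisten).
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done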
00:19:02.877 [2024-12-05 19:37:29.981845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74057 ] 00:19:03.139 [2024-12-05 19:37:30.138399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:03.139 [2024-12-05 19:37:30.268772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.139 [2024-12-05 19:37:30.268783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.709 19:37:30 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:03.970 19:37:30 ublk -- common/autotest_common.sh@868 -- # return 0 00:19:03.970 19:37:30 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:19:03.970 19:37:30 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:03.970 19:37:30 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.970 19:37:30 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:03.970 ************************************ 00:19:03.970 START TEST test_create_ublk 00:19:03.970 ************************************ 00:19:03.970 19:37:30 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:19:03.970 19:37:30 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:19:03.970 19:37:30 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.970 19:37:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:03.970 [2024-12-05 19:37:30.986703] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:03.970 [2024-12-05 19:37:30.989030] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:03.970 19:37:30 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.970 19:37:30 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:19:03.970 19:37:30 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:19:03.970 19:37:30 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.970 19:37:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:03.970 19:37:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.970 19:37:31 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:19:04.231 19:37:31 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:19:04.231 19:37:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.231 19:37:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:04.231 [2024-12-05 19:37:31.235879] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:19:04.231 [2024-12-05 19:37:31.236316] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:19:04.231 [2024-12-05 19:37:31.236335] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:04.231 [2024-12-05 19:37:31.236343] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:04.231 [2024-12-05 19:37:31.243728] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:04.231 [2024-12-05 19:37:31.243757] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:04.231 
[2024-12-05 19:37:31.251707] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:04.231 [2024-12-05 19:37:31.252409] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:04.231 [2024-12-05 19:37:31.275721] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:04.231 19:37:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.231 19:37:31 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:19:04.231 19:37:31 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:19:04.231 19:37:31 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:19:04.231 19:37:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.231 19:37:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:04.231 19:37:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.231 19:37:31 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:19:04.231 { 00:19:04.231 "ublk_device": "/dev/ublkb0", 00:19:04.231 "id": 0, 00:19:04.231 "queue_depth": 512, 00:19:04.231 "num_queues": 4, 00:19:04.231 "bdev_name": "Malloc0" 00:19:04.231 } 00:19:04.231 ]' 00:19:04.231 19:37:31 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:19:04.231 19:37:31 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:19:04.231 19:37:31 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:19:04.231 19:37:31 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:19:04.231 19:37:31 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:19:04.231 19:37:31 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:19:04.231 19:37:31 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:19:04.231 19:37:31 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:19:04.231 19:37:31 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:19:04.231 19:37:31 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:19:04.231 19:37:31 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:19:04.231 19:37:31 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:19:04.231 19:37:31 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:19:04.231 19:37:31 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:19:04.231 19:37:31 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:19:04.231 19:37:31 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:19:04.231 19:37:31 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:19:04.231 19:37:31 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:19:04.231 19:37:31 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:19:04.231 19:37:31 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:19:04.231 19:37:31 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
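Unrolled, the template assembled above is one fio command: a 10-second direct-I/O write of pattern 0xcc across the first 128 MiB of the ublk device, with inline verification. The standalone equivalent, exactly as the harness expands it:

    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0

As fio notes immediately below, the time-based write consumes the whole runtime, so the separate verify read phase never starts.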
00:19:04.231 19:37:31 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:19:04.493 fio: verification read phase will never start because write phase uses all of runtime 00:19:04.493 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:19:04.493 fio-3.35 00:19:04.493 Starting 1 process 00:19:14.483 00:19:14.483 fio_test: (groupid=0, jobs=1): err= 0: pid=74102: Thu Dec 5 19:37:41 2024 00:19:14.483 write: IOPS=20.3k, BW=79.4MiB/s (83.2MB/s)(794MiB/10001msec); 0 zone resets 00:19:14.483 clat (usec): min=32, max=4227, avg=48.40, stdev=82.53 00:19:14.483 lat (usec): min=32, max=4228, avg=48.87, stdev=82.55 00:19:14.483 clat percentiles (usec): 00:19:14.483 | 1.00th=[ 37], 5.00th=[ 38], 10.00th=[ 40], 20.00th=[ 41], 00:19:14.483 | 30.00th=[ 43], 40.00th=[ 43], 50.00th=[ 44], 60.00th=[ 45], 00:19:14.483 | 70.00th=[ 47], 80.00th=[ 49], 90.00th=[ 53], 95.00th=[ 61], 00:19:14.483 | 99.00th=[ 75], 99.50th=[ 83], 99.90th=[ 1270], 99.95th=[ 2409], 00:19:14.483 | 99.99th=[ 3425] 00:19:14.483 bw ( KiB/s): min=59416, max=86448, per=99.91%, avg=81196.63, stdev=6091.39, samples=19 00:19:14.483 iops : min=14854, max=21612, avg=20299.16, stdev=1522.51, samples=19 00:19:14.483 lat (usec) : 50=84.58%, 100=15.09%, 250=0.14%, 500=0.06%, 750=0.01% 00:19:14.483 lat (usec) : 1000=0.01% 00:19:14.483 lat (msec) : 2=0.04%, 4=0.07%, 10=0.01% 00:19:14.483 cpu : usr=3.22%, sys=15.30%, ctx=203191, majf=0, minf=796 00:19:14.483 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.483 issued rwts: total=0,203198,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.483 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.483 00:19:14.483 Run status group 0 (all jobs): 00:19:14.483 WRITE: bw=79.4MiB/s (83.2MB/s), 79.4MiB/s-79.4MiB/s (83.2MB/s-83.2MB/s), io=794MiB (832MB), run=10001-10001msec 00:19:14.483 00:19:14.483 Disk stats (read/write): 00:19:14.483 ublkb0: ios=0/200978, merge=0/0, ticks=0/8114, in_queue=8114, util=99.08% 00:19:14.483 19:37:41 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:19:14.483 19:37:41 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.483 19:37:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:14.483 [2024-12-05 19:37:41.710707] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:14.740 [2024-12-05 19:37:41.743155] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:14.740 [2024-12-05 19:37:41.744126] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:14.740 [2024-12-05 19:37:41.750699] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:14.740 [2024-12-05 19:37:41.750922] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:14.740 [2024-12-05 19:37:41.750935] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:14.740 19:37:41 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.740 19:37:41 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 
0 00:19:14.740 19:37:41 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:19:14.740 19:37:41 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:19:14.740 19:37:41 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:14.741 19:37:41 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:14.741 19:37:41 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:14.741 19:37:41 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:14.741 19:37:41 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:19:14.741 19:37:41 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.741 19:37:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:14.741 [2024-12-05 19:37:41.766741] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:19:14.741 request: 00:19:14.741 { 00:19:14.741 "ublk_id": 0, 00:19:14.741 "method": "ublk_stop_disk", 00:19:14.741 "req_id": 1 00:19:14.741 } 00:19:14.741 Got JSON-RPC error response 00:19:14.741 response: 00:19:14.741 { 00:19:14.741 "code": -19, 00:19:14.741 "message": "No such device" 00:19:14.741 } 00:19:14.741 19:37:41 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:14.741 19:37:41 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:19:14.741 19:37:41 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:14.741 19:37:41 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:14.741 19:37:41 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:14.741 19:37:41 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:19:14.741 19:37:41 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.741 19:37:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:14.741 [2024-12-05 19:37:41.782742] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:14.741 [2024-12-05 19:37:41.786478] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:14.741 [2024-12-05 19:37:41.786506] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:14.741 19:37:41 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.741 19:37:41 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:14.741 19:37:41 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.741 19:37:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:14.999 19:37:42 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.999 19:37:42 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:19:14.999 19:37:42 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:19:14.999 19:37:42 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.999 19:37:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:14.999 19:37:42 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.999 19:37:42 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:19:14.999 19:37:42 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:19:14.999 19:37:42 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:19:14.999 19:37:42 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:19:14.999 19:37:42 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.999 19:37:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:14.999 19:37:42 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.999 19:37:42 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:19:14.999 19:37:42 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:19:14.999 19:37:42 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:19:14.999 00:19:14.999 real 0m11.264s 00:19:14.999 user 0m0.631s 00:19:14.999 sys 0m1.606s 00:19:14.999 19:37:42 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.999 19:37:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:14.999 ************************************ 00:19:14.999 END TEST test_create_ublk 00:19:14.999 ************************************ 00:19:15.258 19:37:42 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:19:15.258 19:37:42 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:15.258 19:37:42 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.258 19:37:42 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:15.258 ************************************ 00:19:15.258 START TEST test_create_multi_ublk 00:19:15.258 ************************************ 00:19:15.258 19:37:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:19:15.258 19:37:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:19:15.258 19:37:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.258 19:37:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:15.258 [2024-12-05 19:37:42.293682] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:15.258 [2024-12-05 19:37:42.295217] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:15.258 19:37:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.258 19:37:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:19:15.258 19:37:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:19:15.258 19:37:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:15.258 19:37:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:19:15.258 19:37:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.258 19:37:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:15.516 19:37:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.516 19:37:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:19:15.516 19:37:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:19:15.516 19:37:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.516 19:37:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:15.516 [2024-12-05 19:37:42.522792] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
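Each device in the loop above is created by the same two RPCs, after a one-time ublk_create_target. One iteration written out as direct rpc.py calls, paths relative to the SPDK repo:

    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 128 4096   # 128 MB bdev, 4096-byte blocks
    ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512    # exposes /dev/ublkb0

The ADD_DEV, SET_PARAMS, and START_DEV control commands traced around this point are the kernel-side handshake those calls trigger.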
00:19:15.516 [2024-12-05 19:37:42.523086] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:19:15.516 [2024-12-05 19:37:42.523098] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:15.516 [2024-12-05 19:37:42.523106] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:15.516 [2024-12-05 19:37:42.534722] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:15.516 [2024-12-05 19:37:42.534737] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:15.516 [2024-12-05 19:37:42.546686] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:15.516 [2024-12-05 19:37:42.547167] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:15.516 [2024-12-05 19:37:42.594690] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:15.516 19:37:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.516 19:37:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:19:15.516 19:37:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:15.516 19:37:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:19:15.516 19:37:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.516 19:37:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:15.774 19:37:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.774 19:37:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:19:15.774 19:37:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:19:15.774 19:37:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.774 19:37:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:15.774 [2024-12-05 19:37:42.803780] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:19:15.774 [2024-12-05 19:37:42.804065] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:19:15.774 [2024-12-05 19:37:42.804078] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:15.774 [2024-12-05 19:37:42.804082] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:19:15.774 [2024-12-05 19:37:42.811699] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:15.774 [2024-12-05 19:37:42.811716] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:15.774 [2024-12-05 19:37:42.819700] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:15.774 [2024-12-05 19:37:42.820183] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:19:15.774 [2024-12-05 19:37:42.828717] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:19:15.774 19:37:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.774 19:37:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:19:15.774 19:37:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:15.774 19:37:42 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:19:15.774 19:37:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.774 19:37:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:15.774 19:37:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.774 19:37:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:19:15.774 19:37:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:19:15.774 19:37:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.774 19:37:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:15.774 [2024-12-05 19:37:42.980082] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:19:15.774 [2024-12-05 19:37:42.980370] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:19:15.774 [2024-12-05 19:37:42.980382] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:19:15.774 [2024-12-05 19:37:42.980388] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:19:15.774 [2024-12-05 19:37:42.987700] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:15.774 [2024-12-05 19:37:42.987718] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:15.774 [2024-12-05 19:37:42.995693] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:15.774 [2024-12-05 19:37:42.996178] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:19:15.774 [2024-12-05 19:37:43.002727] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:19:15.774 19:37:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.774 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:19:15.774 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:15.774 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:19:15.774 19:37:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.774 19:37:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.033 19:37:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.033 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:19:16.033 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:19:16.033 19:37:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.033 19:37:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.033 [2024-12-05 19:37:43.162789] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:19:16.033 [2024-12-05 19:37:43.163080] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:19:16.033 [2024-12-05 19:37:43.163093] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:19:16.033 [2024-12-05 19:37:43.163098] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:19:16.033 [2024-12-05 
19:37:43.170699] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:16.033 [2024-12-05 19:37:43.170715] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:16.033 [2024-12-05 19:37:43.178695] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:16.033 [2024-12-05 19:37:43.179178] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:19:16.033 [2024-12-05 19:37:43.182395] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:19:16.033 19:37:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.033 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:19:16.033 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:19:16.033 19:37:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.033 19:37:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.033 19:37:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.033 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:19:16.033 { 00:19:16.033 "ublk_device": "/dev/ublkb0", 00:19:16.033 "id": 0, 00:19:16.033 "queue_depth": 512, 00:19:16.033 "num_queues": 4, 00:19:16.033 "bdev_name": "Malloc0" 00:19:16.033 }, 00:19:16.033 { 00:19:16.033 "ublk_device": "/dev/ublkb1", 00:19:16.033 "id": 1, 00:19:16.033 "queue_depth": 512, 00:19:16.033 "num_queues": 4, 00:19:16.033 "bdev_name": "Malloc1" 00:19:16.033 }, 00:19:16.033 { 00:19:16.033 "ublk_device": "/dev/ublkb2", 00:19:16.033 "id": 2, 00:19:16.033 "queue_depth": 512, 00:19:16.033 "num_queues": 4, 00:19:16.033 "bdev_name": "Malloc2" 00:19:16.033 }, 00:19:16.033 { 00:19:16.033 "ublk_device": "/dev/ublkb3", 00:19:16.033 "id": 3, 00:19:16.033 "queue_depth": 512, 00:19:16.033 "num_queues": 4, 00:19:16.033 "bdev_name": "Malloc3" 00:19:16.033 } 00:19:16.033 ]' 00:19:16.033 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:19:16.033 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:16.033 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:19:16.033 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:19:16.033 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:19:16.033 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:19:16.033 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:19:16.331 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:16.331 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:19:16.331 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:16.331 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:19:16.331 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:19:16.331 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:16.331 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:19:16.331 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
00:19:16.331 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:19:16.331 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:19:16.331 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:19:16.331 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:16.331 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:19:16.331 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:16.331 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:19:16.331 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:19:16.331 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:16.331 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:19:16.331 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:19:16.331 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:19:16.331 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:19:16.331 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:19:16.594 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:16.594 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:19:16.594 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:16.594 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:19:16.594 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:19:16.594 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:16.594 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:19:16.594 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:19:16.594 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:19:16.594 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:19:16.595 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:19:16.595 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:16.595 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:19:16.595 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:16.595 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:19:16.595 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:19:16.595 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:19:16.595 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:19:16.595 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:16.595 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:19:16.595 19:37:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.595 19:37:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.595 [2024-12-05 19:37:43.822761] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:16.852 [2024-12-05 19:37:43.856166] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:16.852 [2024-12-05 19:37:43.857232] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:16.852 [2024-12-05 19:37:43.862701] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:16.852 [2024-12-05 19:37:43.862920] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:16.852 [2024-12-05 19:37:43.862933] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:16.852 19:37:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.852 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:16.852 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:19:16.852 19:37:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.852 19:37:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.852 [2024-12-05 19:37:43.878754] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:16.852 [2024-12-05 19:37:43.910698] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:16.852 [2024-12-05 19:37:43.911355] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:16.852 [2024-12-05 19:37:43.918704] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:16.852 [2024-12-05 19:37:43.918950] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:16.852 [2024-12-05 19:37:43.918962] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:16.852 19:37:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.852 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:16.852 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:19:16.852 19:37:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.852 19:37:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.852 [2024-12-05 19:37:43.934755] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:19:16.852 [2024-12-05 19:37:43.966041] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:16.852 [2024-12-05 19:37:43.967112] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:19:16.852 [2024-12-05 19:37:43.974706] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:16.852 [2024-12-05 19:37:43.974929] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:19:16.852 [2024-12-05 19:37:43.974941] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:19:16.852 19:37:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.852 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:16.853 19:37:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:19:16.853 19:37:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.853 19:37:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
00:19:16.853 [2024-12-05 19:37:43.990748] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:19:16.853 [2024-12-05 19:37:44.030723] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:16.853 [2024-12-05 19:37:44.031287] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:19:16.853 [2024-12-05 19:37:44.039717] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:16.853 [2024-12-05 19:37:44.039947] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:19:16.853 [2024-12-05 19:37:44.039954] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:19:16.853 19:37:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.853 19:37:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:19:17.110 [2024-12-05 19:37:44.230744] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:17.110 [2024-12-05 19:37:44.234415] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:17.110 [2024-12-05 19:37:44.234443] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:17.110 19:37:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:19:17.110 19:37:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:17.110 19:37:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:17.110 19:37:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.110 19:37:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:17.369 19:37:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.369 19:37:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:17.369 19:37:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:17.369 19:37:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.369 19:37:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:17.934 19:37:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.934 19:37:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:17.934 19:37:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:19:17.934 19:37:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.934 19:37:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:17.934 19:37:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.935 19:37:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:17.935 19:37:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:19:17.935 19:37:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.935 19:37:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:18.192 19:37:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.192 19:37:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:19:18.192 19:37:45 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:19:18.192 19:37:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.192 19:37:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:18.192 19:37:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.192 19:37:45 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:19:18.192 19:37:45 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:19:18.192 19:37:45 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:19:18.192 19:37:45 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:19:18.192 19:37:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.192 19:37:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:18.192 19:37:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.192 19:37:45 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:19:18.192 19:37:45 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:19:18.192 19:37:45 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:19:18.192 00:19:18.192 real 0m3.165s 00:19:18.192 user 0m0.828s 00:19:18.192 sys 0m0.117s 00:19:18.192 19:37:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.192 19:37:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:18.192 ************************************ 00:19:18.192 END TEST test_create_multi_ublk 00:19:18.192 ************************************ 00:19:18.450 19:37:45 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:18.450 19:37:45 ublk -- ublk/ublk.sh@147 -- # cleanup 00:19:18.450 19:37:45 ublk -- ublk/ublk.sh@130 -- # killprocess 74057 00:19:18.450 19:37:45 ublk -- common/autotest_common.sh@954 -- # '[' -z 74057 ']' 00:19:18.450 19:37:45 ublk -- common/autotest_common.sh@958 -- # kill -0 74057 00:19:18.450 19:37:45 ublk -- common/autotest_common.sh@959 -- # uname 00:19:18.450 19:37:45 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.450 19:37:45 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74057 00:19:18.450 19:37:45 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:18.450 19:37:45 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:18.450 killing process with pid 74057 00:19:18.450 19:37:45 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74057' 00:19:18.450 19:37:45 ublk -- common/autotest_common.sh@973 -- # kill 74057 00:19:18.450 19:37:45 ublk -- common/autotest_common.sh@978 -- # wait 74057 00:19:19.017 [2024-12-05 19:37:46.029146] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:19.017 [2024-12-05 19:37:46.029197] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:19.585 00:19:19.585 real 0m24.498s 00:19:19.585 user 0m34.614s 00:19:19.585 sys 0m10.177s 00:19:19.585 19:37:46 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:19.585 19:37:46 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:19.585 ************************************ 00:19:19.585 END TEST ublk 00:19:19.585 ************************************ 00:19:19.585 19:37:46 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:19:19.585 19:37:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:19:19.585 19:37:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:19.585 19:37:46 -- common/autotest_common.sh@10 -- # set +x 00:19:19.585 ************************************ 00:19:19.585 START TEST ublk_recovery 00:19:19.585 ************************************ 00:19:19.585 19:37:46 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:19:19.585 * Looking for test storage... 00:19:19.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:19:19.585 19:37:46 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:19.585 19:37:46 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:19:19.585 19:37:46 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:19.844 19:37:46 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:19.844 19:37:46 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:19:19.844 19:37:46 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:19.844 19:37:46 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:19.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.844 --rc genhtml_branch_coverage=1 00:19:19.844 --rc genhtml_function_coverage=1 00:19:19.844 --rc genhtml_legend=1 00:19:19.844 --rc geninfo_all_blocks=1 00:19:19.844 --rc geninfo_unexecuted_blocks=1 00:19:19.844 00:19:19.844 ' 00:19:19.844 19:37:46 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:19.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.844 --rc genhtml_branch_coverage=1 00:19:19.844 --rc genhtml_function_coverage=1 00:19:19.844 --rc genhtml_legend=1 00:19:19.844 --rc geninfo_all_blocks=1 00:19:19.844 --rc geninfo_unexecuted_blocks=1 00:19:19.844 00:19:19.844 ' 00:19:19.844 19:37:46 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:19.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.844 --rc genhtml_branch_coverage=1 00:19:19.844 --rc genhtml_function_coverage=1 00:19:19.844 --rc genhtml_legend=1 00:19:19.844 --rc geninfo_all_blocks=1 00:19:19.844 --rc geninfo_unexecuted_blocks=1 00:19:19.844 00:19:19.844 ' 00:19:19.844 19:37:46 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:19.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.844 --rc genhtml_branch_coverage=1 00:19:19.844 --rc genhtml_function_coverage=1 00:19:19.844 --rc genhtml_legend=1 00:19:19.844 --rc geninfo_all_blocks=1 00:19:19.844 --rc geninfo_unexecuted_blocks=1 00:19:19.844 00:19:19.844 ' 00:19:19.844 19:37:46 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:19:19.844 19:37:46 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:19:19.844 19:37:46 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:19:19.844 19:37:46 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:19:19.844 19:37:46 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:19:19.844 19:37:46 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:19:19.844 19:37:46 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:19:19.844 19:37:46 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:19:19.844 19:37:46 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:19:19.844 19:37:46 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:19:19.844 19:37:46 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74450 00:19:19.844 19:37:46 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:19.844 19:37:46 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74450 00:19:19.844 19:37:46 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:19.844 19:37:46 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74450 ']' 00:19:19.844 19:37:46 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.844 19:37:46 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.844 19:37:46 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.844 19:37:46 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.844 19:37:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.844 [2024-12-05 19:37:46.940461] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:19:19.844 [2024-12-05 19:37:46.940587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74450 ] 00:19:20.103 [2024-12-05 19:37:47.099111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:20.103 [2024-12-05 19:37:47.184799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.103 [2024-12-05 19:37:47.184837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.670 19:37:47 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.670 19:37:47 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:19:20.670 19:37:47 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:19:20.670 19:37:47 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.670 19:37:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.670 [2024-12-05 19:37:47.768714] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:20.670 [2024-12-05 19:37:47.770554] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:20.670 19:37:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.670 19:37:47 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:19:20.670 19:37:47 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.670 19:37:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.670 malloc0 00:19:20.670 19:37:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.670 19:37:47 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:19:20.670 19:37:47 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.670 19:37:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.670 [2024-12-05 19:37:47.877806] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:19:20.671 [2024-12-05 19:37:47.877889] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:19:20.671 [2024-12-05 19:37:47.877903] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:20.671 [2024-12-05 19:37:47.877913] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:19:20.671 [2024-12-05 19:37:47.886774] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:20.671 [2024-12-05 19:37:47.886792] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:20.671 [2024-12-05 19:37:47.893698] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:20.671 [2024-12-05 19:37:47.893827] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:19:20.671 [2024-12-05 19:37:47.915701] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:19:20.671 1 00:19:20.929 19:37:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.929 19:37:47 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:19:21.866 19:37:48 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74486 00:19:21.866 19:37:48 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:19:21.866 19:37:48 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:19:21.866 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:21.866 fio-3.35 00:19:21.866 Starting 1 process 00:19:27.131 19:37:53 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74450 00:19:27.131 19:37:53 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:19:32.415 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74450 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:19:32.415 19:37:58 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74598 00:19:32.415 19:37:58 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:32.415 19:37:58 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74598 00:19:32.415 19:37:58 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:32.415 19:37:58 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74598 ']' 00:19:32.415 19:37:58 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.415 19:37:58 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.415 19:37:58 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.415 19:37:58 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.415 19:37:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.415 [2024-12-05 19:37:59.019490] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
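This is the crux of the recovery suite: fio is driving /dev/ublkb1 when the first target (pid 74450) is killed with SIGKILL, and a second spdk_tgt (pid 74598) is now booting to adopt the orphaned kernel device via ublk_recover_disk. A condensed sketch of the sequence being exercised, with the waitforlisten/sleep synchronization omitted and the repo-relative paths assumed as used throughout this log:

    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &    # first target
    spdk_pid=$!
    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    ./scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128
    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    kill -9 "$spdk_pid"                          # crash the target mid-I/O
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &    # replacement target
    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    ./scripts/rpc.py ublk_recover_disk malloc0 1 # re-attach /dev/ublkb1 in place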
00:19:32.415 [2024-12-05 19:37:59.019618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74598 ] 00:19:32.415 [2024-12-05 19:37:59.175932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:32.415 [2024-12-05 19:37:59.255324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.415 [2024-12-05 19:37:59.255406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.673 19:37:59 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.673 19:37:59 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:19:32.673 19:37:59 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:19:32.673 19:37:59 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.673 19:37:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.673 [2024-12-05 19:37:59.864687] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:32.673 [2024-12-05 19:37:59.866243] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:32.673 19:37:59 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.673 19:37:59 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:19:32.673 19:37:59 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.673 19:37:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.944 malloc0 00:19:32.944 19:37:59 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.944 19:37:59 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:19:32.944 19:37:59 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.944 19:37:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.944 [2024-12-05 19:37:59.952785] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:19:32.944 [2024-12-05 19:37:59.952818] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:32.944 [2024-12-05 19:37:59.952827] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:19:32.944 [2024-12-05 19:37:59.960711] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:19:32.944 [2024-12-05 19:37:59.960731] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:19:32.944 [2024-12-05 19:37:59.960738] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:19:32.944 [2024-12-05 19:37:59.960800] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:19:32.944 1 00:19:32.944 19:37:59 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.944 19:37:59 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74486 00:19:32.944 [2024-12-05 19:37:59.968687] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:19:32.944 [2024-12-05 19:37:59.974989] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:19:32.944 [2024-12-05 19:37:59.982849] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:19:32.944 [2024-12-05 
19:37:59.982868] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:20:29.211 00:20:29.211 fio_test: (groupid=0, jobs=1): err= 0: pid=74489: Thu Dec 5 19:38:49 2024 00:20:29.211 read: IOPS=28.4k, BW=111MiB/s (116MB/s)(6655MiB/60001msec) 00:20:29.211 slat (nsec): min=884, max=385454, avg=4857.23, stdev=1588.47 00:20:29.211 clat (usec): min=566, max=6059.8k, avg=2212.03, stdev=37127.26 00:20:29.211 lat (usec): min=570, max=6059.8k, avg=2216.89, stdev=37127.26 00:20:29.211 clat percentiles (usec): 00:20:29.211 | 1.00th=[ 1614], 5.00th=[ 1762], 10.00th=[ 1778], 20.00th=[ 1811], 00:20:29.211 | 30.00th=[ 1827], 40.00th=[ 1844], 50.00th=[ 1860], 60.00th=[ 1876], 00:20:29.211 | 70.00th=[ 1893], 80.00th=[ 1909], 90.00th=[ 2024], 95.00th=[ 2868], 00:20:29.211 | 99.00th=[ 4817], 99.50th=[ 5276], 99.90th=[ 6521], 99.95th=[ 7635], 00:20:29.211 | 99.99th=[13042] 00:20:29.211 bw ( KiB/s): min=17768, max=131392, per=100.00%, avg=125087.06, stdev=15448.39, samples=108 00:20:29.211 iops : min= 4442, max=32848, avg=31271.76, stdev=3862.10, samples=108 00:20:29.211 write: IOPS=28.4k, BW=111MiB/s (116MB/s)(6650MiB/60001msec); 0 zone resets 00:20:29.211 slat (nsec): min=929, max=360792, avg=4897.51, stdev=1605.28 00:20:29.211 clat (usec): min=611, max=6060.0k, avg=2286.75, stdev=37141.98 00:20:29.211 lat (usec): min=616, max=6060.0k, avg=2291.64, stdev=37141.97 00:20:29.211 clat percentiles (usec): 00:20:29.211 | 1.00th=[ 1647], 5.00th=[ 1827], 10.00th=[ 1860], 20.00th=[ 1893], 00:20:29.211 | 30.00th=[ 1909], 40.00th=[ 1926], 50.00th=[ 1942], 60.00th=[ 1958], 00:20:29.211 | 70.00th=[ 1975], 80.00th=[ 1991], 90.00th=[ 2089], 95.00th=[ 2802], 00:20:29.211 | 99.00th=[ 4817], 99.50th=[ 5342], 99.90th=[ 6652], 99.95th=[ 7767], 00:20:29.211 | 99.99th=[13042] 00:20:29.211 bw ( KiB/s): min=17920, max=131072, per=100.00%, avg=124992.05, stdev=15447.83, samples=108 00:20:29.211 iops : min= 4480, max=32768, avg=31248.01, stdev=3861.96, samples=108 00:20:29.211 lat (usec) : 750=0.01%, 1000=0.01% 00:20:29.211 lat (msec) : 2=85.53%, 4=12.01%, 10=2.43%, 20=0.02%, >=2000=0.01% 00:20:29.211 cpu : usr=6.43%, sys=28.40%, ctx=115713, majf=0, minf=14 00:20:29.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:20:29.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:29.211 issued rwts: total=1703788,1702431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.211 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:29.211 00:20:29.211 Run status group 0 (all jobs): 00:20:29.211 READ: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=6655MiB (6979MB), run=60001-60001msec 00:20:29.211 WRITE: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=6650MiB (6973MB), run=60001-60001msec 00:20:29.211 00:20:29.211 Disk stats (read/write): 00:20:29.211 ublkb1: ios=1700357/1698950, merge=0/0, ticks=3674503/3663844, in_queue=7338347, util=99.89% 00:20:29.211 19:38:49 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:20:29.211 19:38:49 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.211 19:38:49 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.211 [2024-12-05 19:38:49.181319] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:20:29.211 [2024-12-05 19:38:49.217708] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:20:29.211 [2024-12-05 19:38:49.217848] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:20:29.211 [2024-12-05 19:38:49.229713] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:29.211 [2024-12-05 19:38:49.229797] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:20:29.211 [2024-12-05 19:38:49.229806] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:20:29.211 19:38:49 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.211 19:38:49 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:20:29.211 19:38:49 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.211 19:38:49 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.211 [2024-12-05 19:38:49.244756] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:29.211 [2024-12-05 19:38:49.248401] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:29.211 [2024-12-05 19:38:49.248429] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:29.211 19:38:49 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.211 19:38:49 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:20:29.211 19:38:49 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:20:29.211 19:38:49 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74598 00:20:29.211 19:38:49 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74598 ']' 00:20:29.211 19:38:49 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74598 00:20:29.211 19:38:49 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:20:29.211 19:38:49 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.211 19:38:49 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74598 00:20:29.211 killing process with pid 74598 00:20:29.211 19:38:49 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:29.211 19:38:49 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:29.211 19:38:49 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74598' 00:20:29.211 19:38:49 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74598 00:20:29.211 19:38:49 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74598 00:20:29.211 [2024-12-05 19:38:50.388704] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:29.211 [2024-12-05 19:38:50.388756] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:29.211 00:20:29.211 real 1m4.603s 00:20:29.211 user 1m43.156s 00:20:29.211 sys 0m36.314s 00:20:29.211 19:38:51 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:29.211 19:38:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.211 ************************************ 00:20:29.211 END TEST ublk_recovery 00:20:29.211 ************************************ 00:20:29.211 19:38:51 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:20:29.211 19:38:51 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:20:29.211 19:38:51 -- spdk/autotest.sh@260 -- # timing_exit lib 00:20:29.211 19:38:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:29.211 19:38:51 -- common/autotest_common.sh@10 -- # set +x 00:20:29.211 19:38:51 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:20:29.211 19:38:51 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:20:29.211 19:38:51 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 
']' 00:20:29.211 19:38:51 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:29.211 19:38:51 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:29.211 19:38:51 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:29.211 19:38:51 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:20:29.211 19:38:51 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:29.211 19:38:51 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:29.211 19:38:51 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:20:29.211 19:38:51 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:29.211 19:38:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:29.211 19:38:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:29.211 19:38:51 -- common/autotest_common.sh@10 -- # set +x 00:20:29.211 ************************************ 00:20:29.211 START TEST ftl 00:20:29.211 ************************************ 00:20:29.211 19:38:51 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:29.211 * Looking for test storage... 00:20:29.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:29.211 19:38:51 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:29.211 19:38:51 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:20:29.211 19:38:51 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:29.211 19:38:51 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:29.211 19:38:51 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:29.211 19:38:51 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:29.211 19:38:51 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:29.211 19:38:51 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:20:29.211 19:38:51 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:20:29.211 19:38:51 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:20:29.211 19:38:51 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:20:29.211 19:38:51 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:20:29.211 19:38:51 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:20:29.211 19:38:51 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:20:29.211 19:38:51 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:29.211 19:38:51 ftl -- scripts/common.sh@344 -- # case "$op" in 00:20:29.211 19:38:51 ftl -- scripts/common.sh@345 -- # : 1 00:20:29.211 19:38:51 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:29.211 19:38:51 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:29.211 19:38:51 ftl -- scripts/common.sh@365 -- # decimal 1 00:20:29.211 19:38:51 ftl -- scripts/common.sh@353 -- # local d=1 00:20:29.211 19:38:51 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:29.211 19:38:51 ftl -- scripts/common.sh@355 -- # echo 1 00:20:29.211 19:38:51 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:20:29.211 19:38:51 ftl -- scripts/common.sh@366 -- # decimal 2 00:20:29.211 19:38:51 ftl -- scripts/common.sh@353 -- # local d=2 00:20:29.212 19:38:51 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:29.212 19:38:51 ftl -- scripts/common.sh@355 -- # echo 2 00:20:29.212 19:38:51 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:20:29.212 19:38:51 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:29.212 19:38:51 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:29.212 19:38:51 ftl -- scripts/common.sh@368 -- # return 0 00:20:29.212 19:38:51 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:29.212 19:38:51 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:29.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.212 --rc genhtml_branch_coverage=1 00:20:29.212 --rc genhtml_function_coverage=1 00:20:29.212 --rc genhtml_legend=1 00:20:29.212 --rc geninfo_all_blocks=1 00:20:29.212 --rc geninfo_unexecuted_blocks=1 00:20:29.212 00:20:29.212 ' 00:20:29.212 19:38:51 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:29.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.212 --rc genhtml_branch_coverage=1 00:20:29.212 --rc genhtml_function_coverage=1 00:20:29.212 --rc genhtml_legend=1 00:20:29.212 --rc geninfo_all_blocks=1 00:20:29.212 --rc geninfo_unexecuted_blocks=1 00:20:29.212 00:20:29.212 ' 00:20:29.212 19:38:51 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:29.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.212 --rc genhtml_branch_coverage=1 00:20:29.212 --rc genhtml_function_coverage=1 00:20:29.212 --rc genhtml_legend=1 00:20:29.212 --rc geninfo_all_blocks=1 00:20:29.212 --rc geninfo_unexecuted_blocks=1 00:20:29.212 00:20:29.212 ' 00:20:29.212 19:38:51 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:29.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.212 --rc genhtml_branch_coverage=1 00:20:29.212 --rc genhtml_function_coverage=1 00:20:29.212 --rc genhtml_legend=1 00:20:29.212 --rc geninfo_all_blocks=1 00:20:29.212 --rc geninfo_unexecuted_blocks=1 00:20:29.212 00:20:29.212 ' 00:20:29.212 19:38:51 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:29.212 19:38:51 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:29.212 19:38:51 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:29.212 19:38:51 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:29.212 19:38:51 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
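The probe that opens every suite above (lt 1.15 2, dispatching to cmp_versions in scripts/common.sh) checks whether the installed lcov predates 2.0 and therefore still wants the legacy --rc lcov_branch_coverage/lcov_function_coverage spellings that get exported right after it. A minimal re-statement of that comparison in plain bash; the canonical helper is the one in scripts/common.sh:

    lt() {   # succeed when version $1 sorts strictly before version $2
        local IFS='.-:' v
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1   # equal is not "less than"
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "legacy lcov --rc names needed"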
00:20:29.212 19:38:51 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:29.212 19:38:51 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:29.212 19:38:51 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:29.212 19:38:51 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:29.212 19:38:51 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:29.212 19:38:51 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:29.212 19:38:51 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:29.212 19:38:51 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:29.212 19:38:51 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:29.212 19:38:51 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:29.212 19:38:51 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:29.212 19:38:51 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:29.212 19:38:51 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:29.212 19:38:51 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:29.212 19:38:51 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:29.212 19:38:51 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:29.212 19:38:51 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:29.212 19:38:51 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:29.212 19:38:51 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:29.212 19:38:51 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:29.212 19:38:51 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:29.212 19:38:51 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:29.212 19:38:51 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:29.212 19:38:51 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:29.212 19:38:51 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:29.212 19:38:51 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:20:29.212 19:38:51 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:20:29.212 19:38:51 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:20:29.212 19:38:51 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:20:29.212 19:38:51 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:29.212 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:29.212 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:29.212 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:29.212 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:29.212 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:29.212 19:38:52 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75403 00:20:29.212 19:38:52 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75403 00:20:29.212 19:38:52 ftl -- common/autotest_common.sh@835 -- # '[' -z 75403 ']' 00:20:29.212 19:38:52 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.212 19:38:52 ftl -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.212 19:38:52 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.212 19:38:52 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.212 19:38:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:29.212 19:38:52 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:20:29.212 [2024-12-05 19:38:52.098862] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:20:29.212 [2024-12-05 19:38:52.098982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75403 ] 00:20:29.212 [2024-12-05 19:38:52.255388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.212 [2024-12-05 19:38:52.348257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.212 19:38:52 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.212 19:38:52 ftl -- common/autotest_common.sh@868 -- # return 0 00:20:29.212 19:38:52 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:20:29.212 19:38:53 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:20:29.212 19:38:53 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:20:29.212 19:38:53 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:29.212 19:38:54 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:20:29.212 19:38:54 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:20:29.212 19:38:54 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:20:29.212 19:38:54 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:20:29.212 19:38:54 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:20:29.212 19:38:54 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:20:29.212 19:38:54 ftl -- ftl/ftl.sh@50 -- # break 00:20:29.212 19:38:54 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:20:29.212 19:38:54 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:20:29.212 19:38:54 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:20:29.212 19:38:54 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:20:29.212 19:38:54 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:20:29.212 19:38:54 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:20:29.212 19:38:54 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:20:29.212 19:38:54 ftl -- ftl/ftl.sh@63 -- # break 00:20:29.212 19:38:54 ftl -- ftl/ftl.sh@66 -- # killprocess 75403 00:20:29.212 19:38:54 ftl -- common/autotest_common.sh@954 -- # '[' -z 75403 ']' 00:20:29.212 19:38:54 ftl -- common/autotest_common.sh@958 -- # kill -0 75403 00:20:29.212 19:38:54 ftl -- common/autotest_common.sh@959 -- # uname 00:20:29.212 19:38:54 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.212 19:38:54 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75403 00:20:29.212 19:38:54 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:29.212 killing process with pid 75403 00:20:29.212 19:38:54 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:29.212 19:38:54 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75403' 00:20:29.212 19:38:54 ftl -- common/autotest_common.sh@973 -- # kill 75403 00:20:29.212 19:38:54 ftl -- common/autotest_common.sh@978 -- # wait 75403 00:20:29.212 19:38:56 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:20:29.212 19:38:56 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:20:29.212 19:38:56 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:29.212 19:38:56 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:29.212 19:38:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:29.212 ************************************ 00:20:29.212 START TEST ftl_fio_basic 00:20:29.212 ************************************ 00:20:29.212 19:38:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:20:29.212 * Looking for test storage... 00:20:29.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:29.212 19:38:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:29.212 19:38:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:20:29.212 19:38:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:29.212 19:38:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:29.212 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:29.212 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:29.212 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:29.212 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:29.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.213 --rc genhtml_branch_coverage=1 00:20:29.213 --rc genhtml_function_coverage=1 00:20:29.213 --rc genhtml_legend=1 00:20:29.213 --rc geninfo_all_blocks=1 00:20:29.213 --rc geninfo_unexecuted_blocks=1 00:20:29.213 00:20:29.213 ' 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:29.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.213 --rc genhtml_branch_coverage=1 00:20:29.213 --rc genhtml_function_coverage=1 00:20:29.213 --rc genhtml_legend=1 00:20:29.213 --rc geninfo_all_blocks=1 00:20:29.213 --rc geninfo_unexecuted_blocks=1 00:20:29.213 00:20:29.213 ' 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:29.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.213 --rc genhtml_branch_coverage=1 00:20:29.213 --rc genhtml_function_coverage=1 00:20:29.213 --rc genhtml_legend=1 00:20:29.213 --rc geninfo_all_blocks=1 00:20:29.213 --rc geninfo_unexecuted_blocks=1 00:20:29.213 00:20:29.213 ' 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:29.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.213 --rc genhtml_branch_coverage=1 00:20:29.213 --rc genhtml_function_coverage=1 00:20:29.213 --rc genhtml_legend=1 00:20:29.213 --rc geninfo_all_blocks=1 00:20:29.213 --rc geninfo_unexecuted_blocks=1 00:20:29.213 00:20:29.213 ' 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
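Just below, fio.sh picks its workloads out of an associative array keyed by suite name; this run was invoked as "basic", so three randw verify jobs are queued against ftl0. A condensed sketch of that selection; the echo stands in for the real runner, which templates each name into a fio job file (the exact per-job path is script-internal and not shown in this trace):

    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
    suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
    for t in ${suite[basic]}; do
        echo "would run fio job: $t"   # placeholder for the real fio invocation
    done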
00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75530 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75530 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75530 ']' 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.213 19:38:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:29.213 [2024-12-05 19:38:56.320279] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
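The prologue that follows builds the FTL base device: create_base_bdev attaches the QEMU NVMe controller at 0000:00:11.0 as nvme0, then get_bdev_size reads block_size and num_blocks out of the bdev_get_bdevs JSON (4096 B x 1310720 blocks = 5120 MiB here). The same two steps issued by hand against a running target, assuming the default RPC socket:

    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    # the size check parses these two fields out of the JSON dump traced below
    ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq -r '.[] | "\(.block_size) \(.num_blocks)"'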
00:20:29.213 [2024-12-05 19:38:56.320396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75530 ] 00:20:29.473 [2024-12-05 19:38:56.475141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:29.473 [2024-12-05 19:38:56.552424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.473 [2024-12-05 19:38:56.552631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.473 [2024-12-05 19:38:56.552634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.040 19:38:57 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.041 19:38:57 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:20:30.041 19:38:57 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:30.041 19:38:57 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:20:30.041 19:38:57 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:30.041 19:38:57 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:20:30.041 19:38:57 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:20:30.041 19:38:57 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:30.299 19:38:57 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:30.300 19:38:57 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:20:30.300 19:38:57 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:30.300 19:38:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:30.300 19:38:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:30.300 19:38:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:30.300 19:38:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:30.300 19:38:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:30.559 19:38:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:30.559 { 00:20:30.559 "name": "nvme0n1", 00:20:30.559 "aliases": [ 00:20:30.559 "fa278862-cebe-48c5-a374-a09af7f3f3d0" 00:20:30.559 ], 00:20:30.559 "product_name": "NVMe disk", 00:20:30.559 "block_size": 4096, 00:20:30.559 "num_blocks": 1310720, 00:20:30.559 "uuid": "fa278862-cebe-48c5-a374-a09af7f3f3d0", 00:20:30.559 "numa_id": -1, 00:20:30.559 "assigned_rate_limits": { 00:20:30.559 "rw_ios_per_sec": 0, 00:20:30.559 "rw_mbytes_per_sec": 0, 00:20:30.559 "r_mbytes_per_sec": 0, 00:20:30.559 "w_mbytes_per_sec": 0 00:20:30.559 }, 00:20:30.559 "claimed": false, 00:20:30.559 "zoned": false, 00:20:30.559 "supported_io_types": { 00:20:30.559 "read": true, 00:20:30.559 "write": true, 00:20:30.559 "unmap": true, 00:20:30.559 "flush": true, 00:20:30.559 "reset": true, 00:20:30.559 "nvme_admin": true, 00:20:30.559 "nvme_io": true, 00:20:30.559 "nvme_io_md": false, 00:20:30.559 "write_zeroes": true, 00:20:30.559 "zcopy": false, 00:20:30.559 "get_zone_info": false, 00:20:30.559 "zone_management": false, 00:20:30.559 "zone_append": false, 00:20:30.559 "compare": true, 00:20:30.559 "compare_and_write": false, 00:20:30.559 "abort": true, 00:20:30.559 
"seek_hole": false, 00:20:30.559 "seek_data": false, 00:20:30.559 "copy": true, 00:20:30.559 "nvme_iov_md": false 00:20:30.559 }, 00:20:30.559 "driver_specific": { 00:20:30.559 "nvme": [ 00:20:30.559 { 00:20:30.559 "pci_address": "0000:00:11.0", 00:20:30.559 "trid": { 00:20:30.559 "trtype": "PCIe", 00:20:30.559 "traddr": "0000:00:11.0" 00:20:30.559 }, 00:20:30.559 "ctrlr_data": { 00:20:30.559 "cntlid": 0, 00:20:30.559 "vendor_id": "0x1b36", 00:20:30.559 "model_number": "QEMU NVMe Ctrl", 00:20:30.559 "serial_number": "12341", 00:20:30.559 "firmware_revision": "8.0.0", 00:20:30.559 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:30.559 "oacs": { 00:20:30.559 "security": 0, 00:20:30.559 "format": 1, 00:20:30.559 "firmware": 0, 00:20:30.559 "ns_manage": 1 00:20:30.559 }, 00:20:30.559 "multi_ctrlr": false, 00:20:30.559 "ana_reporting": false 00:20:30.559 }, 00:20:30.559 "vs": { 00:20:30.559 "nvme_version": "1.4" 00:20:30.559 }, 00:20:30.559 "ns_data": { 00:20:30.559 "id": 1, 00:20:30.559 "can_share": false 00:20:30.559 } 00:20:30.559 } 00:20:30.559 ], 00:20:30.559 "mp_policy": "active_passive" 00:20:30.559 } 00:20:30.559 } 00:20:30.559 ]' 00:20:30.559 19:38:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:30.559 19:38:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:30.559 19:38:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:30.559 19:38:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:30.559 19:38:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:30.559 19:38:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:20:30.559 19:38:57 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:20:30.559 19:38:57 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:30.559 19:38:57 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:20:30.559 19:38:57 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:30.559 19:38:57 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:30.816 19:38:57 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:20:30.816 19:38:57 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:31.078 19:38:58 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=75649d47-23fc-4eb2-9bf2-9bcda1f63079 00:20:31.078 19:38:58 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 75649d47-23fc-4eb2-9bf2-9bcda1f63079 00:20:31.078 19:38:58 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=72385c68-bd8a-4444-a4c2-e639284306e9 00:20:31.078 19:38:58 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 72385c68-bd8a-4444-a4c2-e639284306e9 00:20:31.078 19:38:58 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:20:31.078 19:38:58 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:31.078 19:38:58 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=72385c68-bd8a-4444-a4c2-e639284306e9 00:20:31.078 19:38:58 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:20:31.078 19:38:58 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 72385c68-bd8a-4444-a4c2-e639284306e9 00:20:31.079 19:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=72385c68-bd8a-4444-a4c2-e639284306e9 
00:20:31.079 19:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:31.079 19:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:31.079 19:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:31.079 19:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 72385c68-bd8a-4444-a4c2-e639284306e9 00:20:31.336 19:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:31.336 { 00:20:31.336 "name": "72385c68-bd8a-4444-a4c2-e639284306e9", 00:20:31.336 "aliases": [ 00:20:31.336 "lvs/nvme0n1p0" 00:20:31.336 ], 00:20:31.336 "product_name": "Logical Volume", 00:20:31.336 "block_size": 4096, 00:20:31.336 "num_blocks": 26476544, 00:20:31.336 "uuid": "72385c68-bd8a-4444-a4c2-e639284306e9", 00:20:31.336 "assigned_rate_limits": { 00:20:31.336 "rw_ios_per_sec": 0, 00:20:31.336 "rw_mbytes_per_sec": 0, 00:20:31.336 "r_mbytes_per_sec": 0, 00:20:31.336 "w_mbytes_per_sec": 0 00:20:31.336 }, 00:20:31.336 "claimed": false, 00:20:31.336 "zoned": false, 00:20:31.336 "supported_io_types": { 00:20:31.336 "read": true, 00:20:31.336 "write": true, 00:20:31.336 "unmap": true, 00:20:31.336 "flush": false, 00:20:31.336 "reset": true, 00:20:31.336 "nvme_admin": false, 00:20:31.336 "nvme_io": false, 00:20:31.336 "nvme_io_md": false, 00:20:31.337 "write_zeroes": true, 00:20:31.337 "zcopy": false, 00:20:31.337 "get_zone_info": false, 00:20:31.337 "zone_management": false, 00:20:31.337 "zone_append": false, 00:20:31.337 "compare": false, 00:20:31.337 "compare_and_write": false, 00:20:31.337 "abort": false, 00:20:31.337 "seek_hole": true, 00:20:31.337 "seek_data": true, 00:20:31.337 "copy": false, 00:20:31.337 "nvme_iov_md": false 00:20:31.337 }, 00:20:31.337 "driver_specific": { 00:20:31.337 "lvol": { 00:20:31.337 "lvol_store_uuid": "75649d47-23fc-4eb2-9bf2-9bcda1f63079", 00:20:31.337 "base_bdev": "nvme0n1", 00:20:31.337 "thin_provision": true, 00:20:31.337 "num_allocated_clusters": 0, 00:20:31.337 "snapshot": false, 00:20:31.337 "clone": false, 00:20:31.337 "esnap_clone": false 00:20:31.337 } 00:20:31.337 } 00:20:31.337 } 00:20:31.337 ]' 00:20:31.337 19:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:31.337 19:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:31.337 19:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:31.337 19:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:31.337 19:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:31.337 19:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:20:31.337 19:38:58 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:20:31.337 19:38:58 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:20:31.337 19:38:58 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:31.594 19:38:58 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:31.594 19:38:58 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:31.594 19:38:58 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 72385c68-bd8a-4444-a4c2-e639284306e9 00:20:31.594 19:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=72385c68-bd8a-4444-a4c2-e639284306e9 00:20:31.594 19:38:58 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:31.594 19:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:31.594 19:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:31.594 19:38:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 72385c68-bd8a-4444-a4c2-e639284306e9 00:20:31.853 19:38:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:31.853 { 00:20:31.853 "name": "72385c68-bd8a-4444-a4c2-e639284306e9", 00:20:31.853 "aliases": [ 00:20:31.853 "lvs/nvme0n1p0" 00:20:31.853 ], 00:20:31.853 "product_name": "Logical Volume", 00:20:31.853 "block_size": 4096, 00:20:31.853 "num_blocks": 26476544, 00:20:31.853 "uuid": "72385c68-bd8a-4444-a4c2-e639284306e9", 00:20:31.853 "assigned_rate_limits": { 00:20:31.853 "rw_ios_per_sec": 0, 00:20:31.853 "rw_mbytes_per_sec": 0, 00:20:31.853 "r_mbytes_per_sec": 0, 00:20:31.853 "w_mbytes_per_sec": 0 00:20:31.853 }, 00:20:31.853 "claimed": false, 00:20:31.853 "zoned": false, 00:20:31.853 "supported_io_types": { 00:20:31.853 "read": true, 00:20:31.853 "write": true, 00:20:31.853 "unmap": true, 00:20:31.853 "flush": false, 00:20:31.853 "reset": true, 00:20:31.853 "nvme_admin": false, 00:20:31.853 "nvme_io": false, 00:20:31.853 "nvme_io_md": false, 00:20:31.853 "write_zeroes": true, 00:20:31.853 "zcopy": false, 00:20:31.853 "get_zone_info": false, 00:20:31.853 "zone_management": false, 00:20:31.853 "zone_append": false, 00:20:31.853 "compare": false, 00:20:31.853 "compare_and_write": false, 00:20:31.853 "abort": false, 00:20:31.853 "seek_hole": true, 00:20:31.853 "seek_data": true, 00:20:31.853 "copy": false, 00:20:31.853 "nvme_iov_md": false 00:20:31.853 }, 00:20:31.853 "driver_specific": { 00:20:31.853 "lvol": { 00:20:31.853 "lvol_store_uuid": "75649d47-23fc-4eb2-9bf2-9bcda1f63079", 00:20:31.853 "base_bdev": "nvme0n1", 00:20:31.853 "thin_provision": true, 00:20:31.853 "num_allocated_clusters": 0, 00:20:31.853 "snapshot": false, 00:20:31.853 "clone": false, 00:20:31.853 "esnap_clone": false 00:20:31.853 } 00:20:31.853 } 00:20:31.853 } 00:20:31.853 ]' 00:20:31.853 19:38:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:31.853 19:38:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:31.853 19:38:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:31.853 19:38:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:31.853 19:38:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:31.853 19:38:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:20:31.853 19:38:59 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:20:31.853 19:38:59 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:32.111 19:38:59 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:20:32.111 19:38:59 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:20:32.111 19:38:59 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:20:32.111 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:20:32.111 19:38:59 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 72385c68-bd8a-4444-a4c2-e639284306e9 00:20:32.112 19:38:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=72385c68-bd8a-4444-a4c2-e639284306e9 00:20:32.112 19:38:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:32.112 19:38:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:32.112 19:38:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:32.112 19:38:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 72385c68-bd8a-4444-a4c2-e639284306e9 00:20:32.373 19:38:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:32.373 { 00:20:32.373 "name": "72385c68-bd8a-4444-a4c2-e639284306e9", 00:20:32.373 "aliases": [ 00:20:32.373 "lvs/nvme0n1p0" 00:20:32.373 ], 00:20:32.373 "product_name": "Logical Volume", 00:20:32.373 "block_size": 4096, 00:20:32.373 "num_blocks": 26476544, 00:20:32.373 "uuid": "72385c68-bd8a-4444-a4c2-e639284306e9", 00:20:32.373 "assigned_rate_limits": { 00:20:32.373 "rw_ios_per_sec": 0, 00:20:32.373 "rw_mbytes_per_sec": 0, 00:20:32.373 "r_mbytes_per_sec": 0, 00:20:32.373 "w_mbytes_per_sec": 0 00:20:32.373 }, 00:20:32.373 "claimed": false, 00:20:32.373 "zoned": false, 00:20:32.373 "supported_io_types": { 00:20:32.373 "read": true, 00:20:32.373 "write": true, 00:20:32.373 "unmap": true, 00:20:32.373 "flush": false, 00:20:32.373 "reset": true, 00:20:32.373 "nvme_admin": false, 00:20:32.373 "nvme_io": false, 00:20:32.373 "nvme_io_md": false, 00:20:32.373 "write_zeroes": true, 00:20:32.373 "zcopy": false, 00:20:32.373 "get_zone_info": false, 00:20:32.373 "zone_management": false, 00:20:32.373 "zone_append": false, 00:20:32.373 "compare": false, 00:20:32.373 "compare_and_write": false, 00:20:32.373 "abort": false, 00:20:32.373 "seek_hole": true, 00:20:32.373 "seek_data": true, 00:20:32.373 "copy": false, 00:20:32.373 "nvme_iov_md": false 00:20:32.373 }, 00:20:32.373 "driver_specific": { 00:20:32.373 "lvol": { 00:20:32.373 "lvol_store_uuid": "75649d47-23fc-4eb2-9bf2-9bcda1f63079", 00:20:32.373 "base_bdev": "nvme0n1", 00:20:32.373 "thin_provision": true, 00:20:32.373 "num_allocated_clusters": 0, 00:20:32.374 "snapshot": false, 00:20:32.374 "clone": false, 00:20:32.374 "esnap_clone": false 00:20:32.374 } 00:20:32.374 } 00:20:32.374 } 00:20:32.374 ]' 00:20:32.374 19:38:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:32.374 19:38:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:32.374 19:38:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:32.374 19:38:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:32.374 19:38:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:32.374 19:38:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:20:32.374 19:38:59 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:20:32.374 19:38:59 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:20:32.374 19:38:59 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 72385c68-bd8a-4444-a4c2-e639284306e9 -c nvc0n1p0 --l2p_dram_limit 60 00:20:32.634 [2024-12-05 19:38:59.746961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.634 [2024-12-05 19:38:59.746999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:32.634 [2024-12-05 19:38:59.747012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:32.634 
[2024-12-05 19:38:59.747019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.634 [2024-12-05 19:38:59.747068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.634 [2024-12-05 19:38:59.747077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:32.634 [2024-12-05 19:38:59.747086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:32.634 [2024-12-05 19:38:59.747092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.634 [2024-12-05 19:38:59.747121] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:32.634 [2024-12-05 19:38:59.747767] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:32.634 [2024-12-05 19:38:59.747790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.634 [2024-12-05 19:38:59.747797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:32.634 [2024-12-05 19:38:59.747805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:20:32.634 [2024-12-05 19:38:59.747812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.634 [2024-12-05 19:38:59.747843] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 28324920-f5d7-4da5-b414-031be44a3d51 00:20:32.634 [2024-12-05 19:38:59.748820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.634 [2024-12-05 19:38:59.748933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:32.634 [2024-12-05 19:38:59.748948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:32.634 [2024-12-05 19:38:59.748956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.634 [2024-12-05 19:38:59.753598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.634 [2024-12-05 19:38:59.753628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:32.634 [2024-12-05 19:38:59.753636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.569 ms 00:20:32.634 [2024-12-05 19:38:59.753643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.634 [2024-12-05 19:38:59.753733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.634 [2024-12-05 19:38:59.753742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:32.634 [2024-12-05 19:38:59.753749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:32.634 [2024-12-05 19:38:59.753760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.634 [2024-12-05 19:38:59.753793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.634 [2024-12-05 19:38:59.753802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:32.634 [2024-12-05 19:38:59.753809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:32.634 [2024-12-05 19:38:59.753816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.634 [2024-12-05 19:38:59.753837] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:32.634 [2024-12-05 19:38:59.756754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.634 [2024-12-05 
19:38:59.756777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:32.634 [2024-12-05 19:38:59.756787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.920 ms 00:20:32.634 [2024-12-05 19:38:59.756795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.634 [2024-12-05 19:38:59.756826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.634 [2024-12-05 19:38:59.756832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:32.634 [2024-12-05 19:38:59.756847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:32.634 [2024-12-05 19:38:59.756853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.634 [2024-12-05 19:38:59.756881] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:32.634 [2024-12-05 19:38:59.757001] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:32.634 [2024-12-05 19:38:59.757013] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:32.634 [2024-12-05 19:38:59.757023] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:32.634 [2024-12-05 19:38:59.757032] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:32.634 [2024-12-05 19:38:59.757039] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:32.634 [2024-12-05 19:38:59.757048] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:32.634 [2024-12-05 19:38:59.757054] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:32.634 [2024-12-05 19:38:59.757061] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:32.634 [2024-12-05 19:38:59.757067] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:32.634 [2024-12-05 19:38:59.757074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.634 [2024-12-05 19:38:59.757081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:32.634 [2024-12-05 19:38:59.757088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:20:32.634 [2024-12-05 19:38:59.757094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.634 [2024-12-05 19:38:59.757165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.634 [2024-12-05 19:38:59.757171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:32.634 [2024-12-05 19:38:59.757178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:32.634 [2024-12-05 19:38:59.757184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.634 [2024-12-05 19:38:59.757278] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:32.634 [2024-12-05 19:38:59.757285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:32.634 [2024-12-05 19:38:59.757294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:32.634 [2024-12-05 19:38:59.757300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.634 [2024-12-05 19:38:59.757307] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:20:32.634 [2024-12-05 19:38:59.757312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:32.634 [2024-12-05 19:38:59.757319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:32.634 [2024-12-05 19:38:59.757324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:32.634 [2024-12-05 19:38:59.757332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:32.634 [2024-12-05 19:38:59.757337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:32.634 [2024-12-05 19:38:59.757343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:32.634 [2024-12-05 19:38:59.757349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:32.634 [2024-12-05 19:38:59.757356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:32.634 [2024-12-05 19:38:59.757361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:32.634 [2024-12-05 19:38:59.757367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:32.634 [2024-12-05 19:38:59.757372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.634 [2024-12-05 19:38:59.757380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:32.634 [2024-12-05 19:38:59.757386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:32.634 [2024-12-05 19:38:59.757392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.634 [2024-12-05 19:38:59.757397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:32.634 [2024-12-05 19:38:59.757403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:32.634 [2024-12-05 19:38:59.757411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:32.634 [2024-12-05 19:38:59.757418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:32.634 [2024-12-05 19:38:59.757423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:32.634 [2024-12-05 19:38:59.757429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:32.634 [2024-12-05 19:38:59.757435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:32.634 [2024-12-05 19:38:59.757441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:32.634 [2024-12-05 19:38:59.757447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:32.634 [2024-12-05 19:38:59.757453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:32.634 [2024-12-05 19:38:59.757458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:32.634 [2024-12-05 19:38:59.757465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:32.634 [2024-12-05 19:38:59.757470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:32.634 [2024-12-05 19:38:59.757478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:32.634 [2024-12-05 19:38:59.757494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:32.634 [2024-12-05 19:38:59.757500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:32.634 [2024-12-05 19:38:59.757505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:32.634 [2024-12-05 19:38:59.757512] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:32.634 [2024-12-05 19:38:59.757517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:32.634 [2024-12-05 19:38:59.757523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:32.634 [2024-12-05 19:38:59.757529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.634 [2024-12-05 19:38:59.757535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:32.634 [2024-12-05 19:38:59.757540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:32.634 [2024-12-05 19:38:59.757547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.635 [2024-12-05 19:38:59.757552] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:32.635 [2024-12-05 19:38:59.757559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:32.635 [2024-12-05 19:38:59.757565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:32.635 [2024-12-05 19:38:59.757572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.635 [2024-12-05 19:38:59.757578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:32.635 [2024-12-05 19:38:59.757586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:32.635 [2024-12-05 19:38:59.757592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:32.635 [2024-12-05 19:38:59.757599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:32.635 [2024-12-05 19:38:59.757604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:32.635 [2024-12-05 19:38:59.757610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:32.635 [2024-12-05 19:38:59.757618] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:32.635 [2024-12-05 19:38:59.757627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:32.635 [2024-12-05 19:38:59.757633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:32.635 [2024-12-05 19:38:59.757640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:32.635 [2024-12-05 19:38:59.757646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:32.635 [2024-12-05 19:38:59.757653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:32.635 [2024-12-05 19:38:59.757658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:32.635 [2024-12-05 19:38:59.757666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:32.635 [2024-12-05 19:38:59.757682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:32.635 [2024-12-05 19:38:59.757689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:20:32.635 [2024-12-05 19:38:59.757694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:32.635 [2024-12-05 19:38:59.757702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:32.635 [2024-12-05 19:38:59.757708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:32.635 [2024-12-05 19:38:59.757716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:32.635 [2024-12-05 19:38:59.757722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:32.635 [2024-12-05 19:38:59.757728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:32.635 [2024-12-05 19:38:59.757734] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:32.635 [2024-12-05 19:38:59.757741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:32.635 [2024-12-05 19:38:59.757749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:32.635 [2024-12-05 19:38:59.757757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:32.635 [2024-12-05 19:38:59.757762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:32.635 [2024-12-05 19:38:59.757773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:32.635 [2024-12-05 19:38:59.757778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.635 [2024-12-05 19:38:59.757786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:32.635 [2024-12-05 19:38:59.757792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:20:32.635 [2024-12-05 19:38:59.757798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.635 [2024-12-05 19:38:59.757858] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
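Two of the FTL layout figures in the dump above can be cross-checked with nothing but the numbers it prints. A short worked check (all inputs are taken from the dump; the shell arithmetic is added here purely for illustration):

    # L2P region: 20971520 entries x 4 B address size = 80 MiB,
    # matching "Region l2p ... blocks: 80.00 MiB" above.
    echo $(( 20971520 * 4 / 1024 / 1024 ))     # -> 80
    # Base device: 103424 MiB capacity / 4096 B blocks = 26476544 blocks,
    # matching the base lvol's num_blocks earlier in the log.
    echo $(( 103424 * 1024 * 1024 / 4096 ))    # -> 26476544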
00:20:32.635 [2024-12-05 19:38:59.757870] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:35.917 [2024-12-05 19:39:03.065462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.917 [2024-12-05 19:39:03.065516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:35.917 [2024-12-05 19:39:03.065530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3307.590 ms 00:20:35.917 [2024-12-05 19:39:03.065541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.917 [2024-12-05 19:39:03.090312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.917 [2024-12-05 19:39:03.090357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:35.917 [2024-12-05 19:39:03.090368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.554 ms 00:20:35.917 [2024-12-05 19:39:03.090378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.917 [2024-12-05 19:39:03.090507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.917 [2024-12-05 19:39:03.090520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:35.917 [2024-12-05 19:39:03.090528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:35.917 [2024-12-05 19:39:03.090539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.917 [2024-12-05 19:39:03.135739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.917 [2024-12-05 19:39:03.135878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:35.917 [2024-12-05 19:39:03.135902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.141 ms 00:20:35.917 [2024-12-05 19:39:03.135912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.917 [2024-12-05 19:39:03.135952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.917 [2024-12-05 19:39:03.135962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:35.917 [2024-12-05 19:39:03.135971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:35.917 [2024-12-05 19:39:03.135980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.917 [2024-12-05 19:39:03.136321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.917 [2024-12-05 19:39:03.136349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:35.917 [2024-12-05 19:39:03.136359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:20:35.917 [2024-12-05 19:39:03.136370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.917 [2024-12-05 19:39:03.136487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.917 [2024-12-05 19:39:03.136502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:35.917 [2024-12-05 19:39:03.136510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:35.917 [2024-12-05 19:39:03.136521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.917 [2024-12-05 19:39:03.150872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.917 [2024-12-05 19:39:03.150905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:35.917 [2024-12-05 
19:39:03.150916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.328 ms 00:20:35.917 [2024-12-05 19:39:03.150927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.917 [2024-12-05 19:39:03.162079] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:36.175 [2024-12-05 19:39:03.175929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.175 [2024-12-05 19:39:03.175959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:36.175 [2024-12-05 19:39:03.175974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.919 ms 00:20:36.175 [2024-12-05 19:39:03.175983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.175 [2024-12-05 19:39:03.219642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.176 [2024-12-05 19:39:03.219689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:36.176 [2024-12-05 19:39:03.219705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.625 ms 00:20:36.176 [2024-12-05 19:39:03.219713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.176 [2024-12-05 19:39:03.219898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.176 [2024-12-05 19:39:03.219908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:36.176 [2024-12-05 19:39:03.219921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:20:36.176 [2024-12-05 19:39:03.219929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.176 [2024-12-05 19:39:03.242806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.176 [2024-12-05 19:39:03.242841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:36.176 [2024-12-05 19:39:03.242854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.827 ms 00:20:36.176 [2024-12-05 19:39:03.242862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.176 [2024-12-05 19:39:03.265196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.176 [2024-12-05 19:39:03.265321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:36.176 [2024-12-05 19:39:03.265341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.289 ms 00:20:36.176 [2024-12-05 19:39:03.265348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.176 [2024-12-05 19:39:03.265929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.176 [2024-12-05 19:39:03.265945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:36.176 [2024-12-05 19:39:03.265956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:20:36.176 [2024-12-05 19:39:03.265963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.176 [2024-12-05 19:39:03.330621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.176 [2024-12-05 19:39:03.330664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:36.176 [2024-12-05 19:39:03.330690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.621 ms 00:20:36.176 [2024-12-05 19:39:03.330701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.176 [2024-12-05 
19:39:03.354690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.176 [2024-12-05 19:39:03.354722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:36.176 [2024-12-05 19:39:03.354735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.911 ms 00:20:36.176 [2024-12-05 19:39:03.354743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.176 [2024-12-05 19:39:03.377243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.176 [2024-12-05 19:39:03.377273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:36.176 [2024-12-05 19:39:03.377286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.458 ms 00:20:36.176 [2024-12-05 19:39:03.377293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.176 [2024-12-05 19:39:03.400442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.176 [2024-12-05 19:39:03.400474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:36.176 [2024-12-05 19:39:03.400486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.109 ms 00:20:36.176 [2024-12-05 19:39:03.400493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.176 [2024-12-05 19:39:03.400537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.176 [2024-12-05 19:39:03.400546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:36.176 [2024-12-05 19:39:03.400560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:36.176 [2024-12-05 19:39:03.400567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.176 [2024-12-05 19:39:03.400662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.176 [2024-12-05 19:39:03.400689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:36.176 [2024-12-05 19:39:03.400699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:36.176 [2024-12-05 19:39:03.400706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.176 [2024-12-05 19:39:03.401538] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3654.166 ms, result 0 00:20:36.176 { 00:20:36.176 "name": "ftl0", 00:20:36.176 "uuid": "28324920-f5d7-4da5-b414-031be44a3d51" 00:20:36.176 } 00:20:36.176 19:39:03 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:20:36.176 19:39:03 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:20:36.176 19:39:03 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:36.176 19:39:03 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:20:36.176 19:39:03 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:36.176 19:39:03 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:36.176 19:39:03 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:36.434 19:39:03 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:36.692 [ 00:20:36.692 { 00:20:36.692 "name": "ftl0", 00:20:36.692 "aliases": [ 00:20:36.692 "28324920-f5d7-4da5-b414-031be44a3d51" 00:20:36.692 ], 00:20:36.692 "product_name": "FTL 
disk", 00:20:36.692 "block_size": 4096, 00:20:36.692 "num_blocks": 20971520, 00:20:36.692 "uuid": "28324920-f5d7-4da5-b414-031be44a3d51", 00:20:36.692 "assigned_rate_limits": { 00:20:36.692 "rw_ios_per_sec": 0, 00:20:36.692 "rw_mbytes_per_sec": 0, 00:20:36.692 "r_mbytes_per_sec": 0, 00:20:36.692 "w_mbytes_per_sec": 0 00:20:36.692 }, 00:20:36.692 "claimed": false, 00:20:36.692 "zoned": false, 00:20:36.692 "supported_io_types": { 00:20:36.692 "read": true, 00:20:36.692 "write": true, 00:20:36.692 "unmap": true, 00:20:36.692 "flush": true, 00:20:36.692 "reset": false, 00:20:36.692 "nvme_admin": false, 00:20:36.692 "nvme_io": false, 00:20:36.692 "nvme_io_md": false, 00:20:36.692 "write_zeroes": true, 00:20:36.692 "zcopy": false, 00:20:36.692 "get_zone_info": false, 00:20:36.692 "zone_management": false, 00:20:36.692 "zone_append": false, 00:20:36.692 "compare": false, 00:20:36.692 "compare_and_write": false, 00:20:36.692 "abort": false, 00:20:36.692 "seek_hole": false, 00:20:36.692 "seek_data": false, 00:20:36.692 "copy": false, 00:20:36.692 "nvme_iov_md": false 00:20:36.692 }, 00:20:36.692 "driver_specific": { 00:20:36.692 "ftl": { 00:20:36.692 "base_bdev": "72385c68-bd8a-4444-a4c2-e639284306e9", 00:20:36.692 "cache": "nvc0n1p0" 00:20:36.692 } 00:20:36.692 } 00:20:36.692 } 00:20:36.692 ] 00:20:36.692 19:39:03 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:20:36.692 19:39:03 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:20:36.692 19:39:03 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:36.951 19:39:04 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:20:36.951 19:39:04 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:37.211 [2024-12-05 19:39:04.206187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.211 [2024-12-05 19:39:04.206225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:37.211 [2024-12-05 19:39:04.206236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:37.211 [2024-12-05 19:39:04.206244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.211 [2024-12-05 19:39:04.206269] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:37.211 [2024-12-05 19:39:04.208340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.211 [2024-12-05 19:39:04.208363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:37.211 [2024-12-05 19:39:04.208375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.056 ms 00:20:37.211 [2024-12-05 19:39:04.208383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.211 [2024-12-05 19:39:04.208684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.211 [2024-12-05 19:39:04.208699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:37.211 [2024-12-05 19:39:04.208707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:20:37.211 [2024-12-05 19:39:04.208713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.211 [2024-12-05 19:39:04.211178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.211 [2024-12-05 19:39:04.211260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:37.211 
[2024-12-05 19:39:04.211273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.448 ms 00:20:37.211 [2024-12-05 19:39:04.211279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.211 [2024-12-05 19:39:04.216076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.211 [2024-12-05 19:39:04.216102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:37.211 [2024-12-05 19:39:04.216111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.771 ms 00:20:37.211 [2024-12-05 19:39:04.216117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.211 [2024-12-05 19:39:04.234644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.211 [2024-12-05 19:39:04.234750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:37.211 [2024-12-05 19:39:04.234776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.466 ms 00:20:37.211 [2024-12-05 19:39:04.234782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.211 [2024-12-05 19:39:04.246658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.211 [2024-12-05 19:39:04.246694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:37.211 [2024-12-05 19:39:04.246708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.840 ms 00:20:37.211 [2024-12-05 19:39:04.246715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.211 [2024-12-05 19:39:04.246845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.211 [2024-12-05 19:39:04.246857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:37.211 [2024-12-05 19:39:04.246865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:37.211 [2024-12-05 19:39:04.246871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.211 [2024-12-05 19:39:04.264499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.211 [2024-12-05 19:39:04.264526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:37.211 [2024-12-05 19:39:04.264536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.610 ms 00:20:37.211 [2024-12-05 19:39:04.264541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.211 [2024-12-05 19:39:04.281881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.211 [2024-12-05 19:39:04.281978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:37.211 [2024-12-05 19:39:04.281993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.308 ms 00:20:37.211 [2024-12-05 19:39:04.281999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.211 [2024-12-05 19:39:04.299122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.211 [2024-12-05 19:39:04.299146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:37.211 [2024-12-05 19:39:04.299156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.090 ms 00:20:37.211 [2024-12-05 19:39:04.299161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.211 [2024-12-05 19:39:04.316187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.211 [2024-12-05 19:39:04.316212] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:37.211 [2024-12-05 19:39:04.316222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.957 ms 00:20:37.211 [2024-12-05 19:39:04.316227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.211 [2024-12-05 19:39:04.316258] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:37.211 [2024-12-05 19:39:04.316269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:37.211 [2024-12-05 19:39:04.316278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:37.211 [2024-12-05 19:39:04.316284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:37.211 [2024-12-05 19:39:04.316291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:37.211 [2024-12-05 19:39:04.316297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:37.211 [2024-12-05 19:39:04.316304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:37.211 [2024-12-05 19:39:04.316310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:37.211 [2024-12-05 19:39:04.316320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:37.211 [2024-12-05 19:39:04.316326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:37.211 [2024-12-05 19:39:04.316333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:37.211 [2024-12-05 19:39:04.316338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:37.211 [2024-12-05 19:39:04.316346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:37.211 [2024-12-05 19:39:04.316351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:37.211 [2024-12-05 19:39:04.316358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:37.211 [2024-12-05 19:39:04.316364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:37.211 [2024-12-05 19:39:04.316371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:37.211 [2024-12-05 19:39:04.316376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:37.211 [2024-12-05 19:39:04.316384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:37.211 [2024-12-05 19:39:04.316389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:37.211 [2024-12-05 19:39:04.316397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:37.211 [2024-12-05 19:39:04.316403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:37.211 [2024-12-05 19:39:04.316410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 
[2024-12-05 19:39:04.316415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:37.212 [2024-12-05 19:39:04.316575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free
00:20:37.212 [2024-12-05 19:39:04.316583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 48-100: 0 / 261120 wr_cnt: 0 state: free (53 identical per-band lines collapsed)
00:20:37.213 [2024-12-05 19:39:04.316982] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:37.213 [2024-12-05 19:39:04.316989] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 28324920-f5d7-4da5-b414-031be44a3d51
00:20:37.213 [2024-12-05 19:39:04.316995] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:37.213 [2024-12-05 19:39:04.317004] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:37.213 [2024-12-05 19:39:04.317009] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:37.213 [2024-12-05 19:39:04.317018] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:37.213 [2024-12-05 19:39:04.317023] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:37.213 [2024-12-05 19:39:04.317031] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:37.213 [2024-12-05 19:39:04.317036] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:37.213 [2024-12-05 19:39:04.317043] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:37.213 [2024-12-05 19:39:04.317048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:20:37.213 [2024-12-05 19:39:04.317054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.213 [2024-12-05 19:39:04.317061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:20:37.213 [2024-12-05 19:39:04.317069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.798 ms
00:20:37.213 [2024-12-05 19:39:04.317074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.213 [2024-12-05 19:39:04.326456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.213 [2024-12-05 19:39:04.326483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:20:37.213 [2024-12-05 19:39:04.326491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.352 ms
00:20:37.213 [2024-12-05 19:39:04.326497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.213 [2024-12-05 19:39:04.326790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.213 [2024-12-05 19:39:04.326801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:20:37.213 [2024-12-05 19:39:04.326809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms
00:20:37.213 [2024-12-05 19:39:04.326815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.213 [2024-12-05 19:39:04.361048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:37.213 [2024-12-05 19:39:04.361075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:20:37.213 [2024-12-05 19:39:04.361085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:37.213 [2024-12-05 19:39:04.361091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
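(The WAF figure in the dump above is simply total media writes divided by user writes: with 960 internal metadata writes and 0 user writes the ratio is undefined, so the log prints inf. A minimal sketch of that calculation; the waf helper below is hypothetical and not a function in the SPDK tree:

  # Hypothetical helper: reproduce the WAF value that ftl_dev_dump_stats prints
  # from its "total writes" and "user writes" counters.
  waf() {
    local total=$1 user=$2
    if [ "$user" -eq 0 ]; then
      echo inf    # no user data written yet, so amplification is undefined
    else
      awk -v t="$total" -v u="$user" 'BEGIN { printf "%.3f\n", t / u }'
    fi
  }
  waf 960 0   # prints inf, matching the dump: 960 media writes, 0 user writes
)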
00:20:37.213 [2024-12-05 19:39:04.361138 .. 19:39:04.472295] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev (each duration: 0.000 ms, status: 0; eleven identical four-record trace blocks collapsed)
00:20:37.472 [2024-12-05 19:39:04.472412] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 266.208 ms, result 0
00:20:37.472 true
00:20:37.472 19:39:04 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75530
00:20:37.472 19:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75530 ']'
00:20:37.472 19:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75530
00:20:37.472 19:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname
00:20:37.472 19:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:37.472 19:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75530
00:20:37.472 killing process with pid 75530
00:20:37.472 19:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:37.472 19:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:37.472 19:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75530'
00:20:37.472 19:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75530
00:20:37.472 19:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75530
00:20:41.678 19:39:08 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:20:41.678 19:39:08 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:20:41.678 19:39:08 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify
00:20:41.678 19:39:08 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:41.679 19:39:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:20:41.679 19:39:08 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:20:41.679 19:39:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:20:41.679 19:39:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:20:41.679 19:39:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:20:41.679 19:39:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers
00:20:41.679 19:39:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:20:41.679 19:39:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift
00:20:41.679 19:39:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib=
00:20:41.679 19:39:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:20:41.679 19:39:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:20:41.679 19:39:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan
00:20:41.679 19:39:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:20:41.679 19:39:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:20:41.679 19:39:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:20:41.679 19:39:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break
00:20:41.679 19:39:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:20:41.679 19:39:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:20:41.679 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1
00:20:41.679 fio-3.35
00:20:41.679 Starting 1 thread
00:20:46.991
00:20:46.991 test: (groupid=0, jobs=1): err= 0: pid=75720: Thu Dec 5 19:39:13 2024
00:20:46.991   read: IOPS=917, BW=60.9MiB/s (63.9MB/s)(255MiB/4178msec)
00:20:46.991     slat (usec): min=3, max=112, avg= 6.30, stdev= 3.93
00:20:46.991     clat (usec): min=222, max=1398, avg=488.63, stdev=140.53
00:20:46.991      lat (usec): min=226, max=1403, avg=494.93, stdev=141.78
00:20:46.991     clat percentiles (usec):
00:20:46.991      |  1.00th=[  297],  5.00th=[  314], 10.00th=[  318], 20.00th=[  326],
00:20:46.991      | 30.00th=[  396], 40.00th=[  469], 50.00th=[  515], 60.00th=[  529],
00:20:46.991      | 70.00th=[  545], 80.00th=[  586], 90.00th=[  611], 95.00th=[  644],
00:20:46.991      | 99.00th=[ 1012], 99.50th=[ 1106], 99.90th=[ 1336], 99.95th=[ 1352],
00:20:46.991      | 99.99th=[ 1401]
00:20:46.991   write: IOPS=923, BW=61.3MiB/s (64.3MB/s)(256MiB/4175msec); 0 zone resets
00:20:46.991     slat (nsec): min=13634, max=88204, avg=23409.27, stdev=7998.39
00:20:46.991     clat (usec): min=264, max=2611, avg=554.12, stdev=156.08
00:20:46.991      lat (usec): min=283, max=2636, avg=577.53, stdev=159.41
00:20:46.991     clat percentiles (usec):
00:20:46.991      |  1.00th=[  334],  5.00th=[  343], 10.00th=[  347], 20.00th=[  375],
00:20:46.991      | 30.00th=[  482], 40.00th=[  537], 50.00th=[  562], 60.00th=[  603],
00:20:46.991      | 70.00th=[  627], 80.00th=[  644], 90.00th=[  701], 95.00th=[  734],
00:20:46.991      | 99.00th=[ 1037], 99.50th=[ 1156], 99.90th=[ 1434], 99.95th=[ 1696],
00:20:46.991      | 99.99th=[ 2606]
00:20:46.991    bw (  KiB/s): min=50184, max=80104, per=100.00%, avg=62934.00, stdev=12213.15, samples=8
00:20:46.991    iops        : min=  738, max= 1178, avg=925.50, stdev=179.61, samples=8
00:20:46.991   lat (usec)   : 250=0.01%, 500=38.85%, 750=57.21%, 1000=2.67%
00:20:46.991   lat (msec)   : 2=1.25%, 4=0.01%
00:20:46.991   cpu          : usr=99.14%, sys=0.05%, ctx=26, majf=0, minf=1169
00:20:46.991   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:46.991      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.991      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:46.991      issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:46.991      latency   : target=0, window=0, percentile=100.00%, depth=1
00:20:46.991
00:20:46.991 Run status group 0 (all jobs):
00:20:46.991   READ: bw=60.9MiB/s (63.9MB/s), 60.9MiB/s-60.9MiB/s (63.9MB/s-63.9MB/s), io=255MiB (267MB), run=4178-4178msec
00:20:46.991   WRITE: bw=61.3MiB/s (64.3MB/s), 61.3MiB/s-61.3MiB/s (64.3MB/s-64.3MB/s), io=256MiB (269MB), run=4175-4175msec
00:20:47.559 -----------------------------------------------------
00:20:47.559 Suppressions used:
00:20:47.559   count      bytes template
00:20:47.559       1          5 /usr/src/fio/parse.c
00:20:47.559       1          8 libtcmalloc_minimal.so
00:20:47.559       1        904 libcrypto.so
00:20:47.559 -----------------------------------------------------
00:20:47.559
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib=
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:20:47.559 19:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
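(The fio_bdev/fio_plugin helpers traced above locate the ASAN runtime that the sanitized SPDK fio plugin links against and preload it ahead of the plugin itself, since /usr/src/fio/fio is not an instrumented binary and would otherwise abort on load. The same dance as a standalone sketch, with paths exactly as in the trace:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  # pick the ASAN runtime out of the plugin's dynamic dependencies
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  # preload ASAN first, then the SPDK bdev ioengine, and run the job file
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
)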
00:20:47.819 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:20:47.819 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:20:47.819 fio-3.35
00:20:47.819 Starting 2 threads
00:21:14.354
00:21:14.355 first_half: (groupid=0, jobs=1): err= 0: pid=75817: Thu Dec 5 19:39:37 2024
00:21:14.355   read: IOPS=3045, BW=11.9MiB/s (12.5MB/s)(255MiB/21425msec)
00:21:14.355     slat (nsec): min=3121, max=20369, avg=3861.90, stdev=745.12
00:21:14.355     clat (usec): min=603, max=273629, avg=33329.08, stdev=16002.58
00:21:14.355      lat (usec): min=607, max=273634, avg=33332.95, stdev=16002.60
00:21:14.355     clat percentiles (msec):
00:21:14.355      |  1.00th=[    6],  5.00th=[   27], 10.00th=[   29], 20.00th=[   29],
00:21:14.355      | 30.00th=[   30], 40.00th=[   30], 50.00th=[   30], 60.00th=[   31],
00:21:14.355      | 70.00th=[   31], 80.00th=[   34], 90.00th=[   38], 95.00th=[   47],
00:21:14.355      | 99.00th=[  125], 99.50th=[  142], 99.90th=[  157], 99.95th=[  232],
00:21:14.355      | 99.99th=[  266]
00:21:14.355   write: IOPS=3994, BW=15.6MiB/s (16.4MB/s)(256MiB/16407msec); 0 zone resets
00:21:14.355     slat (usec): min=3, max=281, avg= 5.40, stdev= 2.72
00:21:14.355     clat (usec): min=347, max=75504, avg=8643.62, stdev=14701.52
00:21:14.355      lat (usec): min=352, max=75509, avg=8649.02, stdev=14701.52
00:21:14.355     clat percentiles (usec):
00:21:14.355      |  1.00th=[  644],  5.00th=[  734], 10.00th=[  807], 20.00th=[ 1090],
00:21:14.355      | 30.00th=[ 2278], 40.00th=[ 3556], 50.00th=[ 4686], 60.00th=[ 5276],
00:21:14.355      | 70.00th=[ 5866], 80.00th=[ 9372], 90.00th=[12256], 95.00th=[56886],
00:21:14.355      | 99.00th=[66847], 99.50th=[70779], 99.90th=[71828], 99.95th=[72877],
00:21:14.355      | 99.99th=[74974]
00:21:14.355    bw (  KiB/s): min= 9840, max=42824, per=100.00%, avg=30835.59, stdev=11748.60, samples=17
00:21:14.355    iops        : min= 2460, max=10706, avg=7708.88, stdev=2937.14, samples=17
00:21:14.355   lat (usec)   : 500=0.04%, 750=3.16%, 1000=5.53%
00:21:14.355   lat (msec)   : 2=5.66%, 4=7.64%, 10=20.17%, 20=4.80%, 50=47.44%
00:21:14.355   lat (msec)   : 100=4.68%, 250=0.86%, 500=0.02%
00:21:14.355   cpu          : usr=99.42%, sys=0.12%, ctx=33, majf=0, minf=5595
00:21:14.355   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:21:14.355      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:14.355      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:21:14.355      issued rwts: total=65240,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:14.355      latency   : target=0, window=0, percentile=100.00%, depth=128
00:21:14.355 second_half: (groupid=0, jobs=1): err= 0: pid=75818: Thu Dec 5 19:39:37 2024
00:21:14.355   read: IOPS=3021, BW=11.8MiB/s (12.4MB/s)(255MiB/21595msec)
00:21:14.355     slat (usec): min=3, max=338, avg= 3.80, stdev= 1.54
00:21:14.355     clat (usec): min=620, max=278286, avg=32917.82, stdev=17944.35
00:21:14.355      lat (usec): min=632, max=278290, avg=32921.62, stdev=17944.37
00:21:14.355     clat percentiles (msec):
00:21:14.355      |  1.00th=[    8],  5.00th=[   24], 10.00th=[   29], 20.00th=[   29],
00:21:14.355      | 30.00th=[   30], 40.00th=[   30], 50.00th=[   30], 60.00th=[   31],
00:21:14.355      | 70.00th=[   31], 80.00th=[   34], 90.00th=[   37], 95.00th=[   45],
00:21:14.355      | 99.00th=[  136], 99.50th=[  150], 99.90th=[  199], 99.95th=[  215],
00:21:14.355      | 99.99th=[  271]
00:21:14.355   write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(256MiB/18302msec); 0 zone resets
00:21:14.355     slat (usec): min=3, max=296, avg= 5.53, stdev= 2.61
00:21:14.355     clat (usec): min=372, max=75365, avg=9388.18, stdev=15517.58
00:21:14.355      lat (usec): min=380, max=75370, avg=9393.71, stdev=15517.62
00:21:14.355     clat percentiles (usec):
00:21:14.355      |  1.00th=[  635],  5.00th=[  717], 10.00th=[  783], 20.00th=[ 1004],
00:21:14.355      | 30.00th=[ 1942], 40.00th=[ 3032], 50.00th=[ 3949], 60.00th=[ 5080],
00:21:14.355      | 70.00th=[ 6128], 80.00th=[10159], 90.00th=[28443], 95.00th=[57410],
00:21:14.355      | 99.00th=[67634], 99.50th=[70779], 99.90th=[72877], 99.95th=[73925],
00:21:14.355      | 99.99th=[74974]
00:21:14.355    bw (  KiB/s): min=  920, max=43056, per=91.51%, avg=26214.40, stdev=12367.22, samples=20
00:21:14.355    iops        : min=  230, max=10764, avg=6553.60, stdev=3091.81, samples=20
00:21:14.355   lat (usec)   : 500=0.02%, 750=3.85%, 1000=6.12%
00:21:14.355   lat (msec)   : 2=5.33%, 4=10.25%, 10=16.04%, 20=4.78%, 50=48.09%
00:21:14.355   lat (msec)   : 100=4.46%, 250=1.05%, 500=0.01%
00:21:14.355   cpu          : usr=99.29%, sys=0.12%, ctx=38, majf=0, minf=5520
00:21:14.355   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:21:14.355      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:14.355      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:21:14.355      issued rwts: total=65250,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:14.355      latency   : target=0, window=0, percentile=100.00%, depth=128
00:21:14.355
00:21:14.355 Run status group 0 (all jobs):
00:21:14.355   READ: bw=23.6MiB/s (24.8MB/s), 11.8MiB/s-11.9MiB/s (12.4MB/s-12.5MB/s), io=510MiB (534MB), run=21425-21595msec
00:21:14.355   WRITE: bw=28.0MiB/s (29.3MB/s), 14.0MiB/s-15.6MiB/s (14.7MB/s-16.4MB/s), io=512MiB (537MB), run=16407-18302msec
00:21:14.355 -----------------------------------------------------
00:21:14.355 Suppressions used:
00:21:14.355   count      bytes template
00:21:14.355       2         10 /usr/src/fio/parse.c
00:21:14.355       2        192 /usr/src/fio/iolog.c
00:21:14.355       1          8 libtcmalloc_minimal.so
00:21:14.355       1        904 libcrypto.so
00:21:14.355 -----------------------------------------------------
00:21:14.355
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib=
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:21:14.355 19:39:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
fio-3.35
Starting 1 thread
00:21:29.245
00:21:29.245 test: (groupid=0, jobs=1): err= 0: pid=76109: Thu Dec 5 19:39:54 2024
00:21:29.245   read: IOPS=8006, BW=31.3MiB/s (32.8MB/s)(255MiB/8144msec)
00:21:29.245     slat (nsec): min=3110, max=18131, avg=3670.44, stdev=666.20
00:21:29.245     clat (usec): min=625, max=37654, avg=15980.52, stdev=2099.30
00:21:29.245      lat (usec): min=630, max=37657, avg=15984.19, stdev=2099.40
00:21:29.245     clat percentiles (usec):
00:21:29.245      |  1.00th=[13435],  5.00th=[14746], 10.00th=[14877], 20.00th=[15008],
00:21:29.245      | 30.00th=[15139], 40.00th=[15270], 50.00th=[15533], 60.00th=[15533],
00:21:29.245      | 70.00th=[15664], 80.00th=[15926], 90.00th=[17957], 95.00th=[20579],
00:21:29.245      | 99.00th=[25035], 99.50th=[27657], 99.90th=[30278], 99.95th=[31327],
00:21:29.245      | 99.99th=[37487]
00:21:29.245   write: IOPS=13.8k, BW=53.8MiB/s (56.5MB/s)(256MiB/4755msec); 0 zone resets
00:21:29.245     slat (usec): min=4, max=276, avg= 6.65, stdev= 2.87
00:21:29.245     clat (usec): min=441, max=65596, avg=9244.56, stdev=13767.99
00:21:29.245      lat (usec): min=450, max=65602, avg=9251.21, stdev=13768.04
00:21:29.245     clat percentiles (usec):
00:21:29.245      |  1.00th=[  627],  5.00th=[  766], 10.00th=[  889], 20.00th=[ 1221],
00:21:29.245      | 30.00th=[ 1680], 40.00th=[ 2737], 50.00th=[ 4621], 60.00th=[ 5276],
00:21:29.245      | 70.00th=[ 6325], 80.00th=[ 8029], 90.00th=[32900], 95.00th=[43779],
00:21:29.245      | 99.00th=[56886], 99.50th=[58983], 99.90th=[61604], 99.95th=[62653],
00:21:29.245      | 99.99th=[65274]
00:21:29.245    bw (  KiB/s): min=19648, max=93200, per=95.10%, avg=52428.80, stdev=23807.22, samples=10
00:21:29.245    iops        : min= 4912, max=23300, avg=13107.20, stdev=5951.81, samples=10
00:21:29.246   lat (usec)   : 500=0.01%, 750=2.18%, 1000=4.93%
00:21:29.246   lat (msec)   : 2=10.55%, 4=4.56%, 10=19.53%, 20=47.28%, 50=9.32%
00:21:29.246   lat (msec)   : 100=1.63%
00:21:29.246   cpu          : usr=99.14%, sys=0.19%, ctx=17, majf=0, minf=5565
00:21:29.246   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:21:29.246      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:29.246      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:21:29.246      issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:29.246      latency   : target=0, window=0, percentile=100.00%, depth=128
00:21:29.246
00:21:29.246 Run status group 0 (all jobs):
00:21:29.246   READ: bw=31.3MiB/s (32.8MB/s), 31.3MiB/s-31.3MiB/s (32.8MB/s-32.8MB/s), io=255MiB (267MB), run=8144-8144msec
00:21:29.246   WRITE: bw=53.8MiB/s (56.5MB/s), 53.8MiB/s-53.8MiB/s (56.5MB/s-56.5MB/s), io=256MiB (268MB), run=4755-4755msec
00:21:29.246 -----------------------------------------------------
00:21:29.246 Suppressions used:
00:21:29.246   count      bytes template
00:21:29.246       1          5 /usr/src/fio/parse.c
00:21:29.246       2        192 /usr/src/fio/iolog.c
00:21:29.246       1          8 libtcmalloc_minimal.so
00:21:29.246       1        904 libcrypto.so
00:21:29.246 -----------------------------------------------------
00:21:29.246
00:21:29.246 19:39:55 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128
00:21:29.246 19:39:55 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:29.246 19:39:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:21:29.246 19:39:55 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:21:29.246 Remove shared memory files
00:21:29.246 19:39:55 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm
00:21:29.246 19:39:55 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files
00:21:29.246 19:39:55 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f
00:21:29.246 19:39:55 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f
00:21:29.246 19:39:55 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57311 /dev/shm/spdk_tgt_trace.pid74450
00:21:29.246 19:39:55 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:21:29.246 19:39:55 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f
00:21:29.246 ************************************
00:21:29.246 END TEST ftl_fio_basic
00:21:29.246 ************************************
00:21:29.246
00:21:29.246 real	0m59.404s
00:21:29.246 user	2m8.645s
00:21:29.246 sys	0m2.531s
00:21:29.246 19:39:55 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0
00:21:29.246 19:39:55 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:21:29.246 19:39:55 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:29.246 19:39:55 ftl -- common/autotest_common.sh@10 -- # set +x
00:21:29.246 ************************************
00:21:29.246 START TEST ftl_bdevperf
00:21:29.246 ************************************
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0
00:21:29.246 * Looking for test storage...
00:21:29.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1' (the same multi-line option block is echoed four times here, for the LCOV_OPTS export and assignment and the LCOV export and assignment; the repeats are collapsed)
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 ...' (option block as above)
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid=
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append=
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=76336
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 76336
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 76336 ']'
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:29.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:29.246 19:39:55 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:21:29.246 [2024-12-05 19:39:55.768014] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization...
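(At this point bdevperf has been started with -z, wait-for-RPC mode, and -T ftl0, so it sits idle until the FTL bdev stack is assembled over rpc.py. A condensed sketch of that flow, with addresses, names, UUIDs, and sizes taken from the trace that follows; the final bdevperf.py call is an assumption based on the usual SPDK bdevperf workflow and is not itself shown in this log:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # base device: the NVMe at 0000:00:11.0, carved into a thin-provisioned lvol
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  $RPC bdev_lvol_create_lvstore nvme0n1 lvs
  $RPC bdev_lvol_create nvme0n1p0 103424 -t -u 2c3972de-33b9-478e-9adc-e832bd5687bc
  # cache device: the NVMe at 0000:00:10.0, split to get a 5171 MiB NV-cache partition
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $RPC bdev_split_create nvc0n1 -s 5171 1
  # FTL bdev on top of both, with the L2P table capped at 20 MiB of DRAM
  $RPC -t 240 bdev_ftl_create -b ftl0 -d e1d0bfff-5eab-44f4-b4db-e0b962769ec6 \
      -c nvc0n1p0 --l2p_dram_limit 20
  # hedged: the workload is then kicked off against ftl0 via the perform_tests
  # RPC, e.g. examples/bdev/bdevperf/bdevperf.py -t 240 perform_tests
)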
00:21:29.246 [2024-12-05 19:39:55.768282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76336 ] 00:21:29.246 [2024-12-05 19:39:55.926706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.246 [2024-12-05 19:39:56.005378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.503 19:39:56 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:29.503 19:39:56 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:21:29.503 19:39:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:29.503 19:39:56 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:21:29.503 19:39:56 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:29.503 19:39:56 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:21:29.503 19:39:56 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:21:29.503 19:39:56 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:29.771 19:39:56 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:29.771 19:39:56 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:21:29.771 19:39:56 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:29.771 19:39:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:29.771 19:39:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:29.771 19:39:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:21:29.771 19:39:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:29.771 19:39:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:30.030 19:39:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:30.030 { 00:21:30.030 "name": "nvme0n1", 00:21:30.030 "aliases": [ 00:21:30.030 "6a70d205-40f4-4a06-8165-d989e104e630" 00:21:30.030 ], 00:21:30.030 "product_name": "NVMe disk", 00:21:30.030 "block_size": 4096, 00:21:30.030 "num_blocks": 1310720, 00:21:30.030 "uuid": "6a70d205-40f4-4a06-8165-d989e104e630", 00:21:30.030 "numa_id": -1, 00:21:30.030 "assigned_rate_limits": { 00:21:30.030 "rw_ios_per_sec": 0, 00:21:30.030 "rw_mbytes_per_sec": 0, 00:21:30.030 "r_mbytes_per_sec": 0, 00:21:30.030 "w_mbytes_per_sec": 0 00:21:30.030 }, 00:21:30.030 "claimed": true, 00:21:30.030 "claim_type": "read_many_write_one", 00:21:30.030 "zoned": false, 00:21:30.030 "supported_io_types": { 00:21:30.030 "read": true, 00:21:30.030 "write": true, 00:21:30.030 "unmap": true, 00:21:30.030 "flush": true, 00:21:30.030 "reset": true, 00:21:30.030 "nvme_admin": true, 00:21:30.030 "nvme_io": true, 00:21:30.030 "nvme_io_md": false, 00:21:30.030 "write_zeroes": true, 00:21:30.030 "zcopy": false, 00:21:30.030 "get_zone_info": false, 00:21:30.030 "zone_management": false, 00:21:30.030 "zone_append": false, 00:21:30.030 "compare": true, 00:21:30.030 "compare_and_write": false, 00:21:30.030 "abort": true, 00:21:30.030 "seek_hole": false, 00:21:30.030 "seek_data": false, 00:21:30.030 "copy": true, 00:21:30.030 "nvme_iov_md": false 00:21:30.030 }, 00:21:30.030 "driver_specific": { 00:21:30.030 
"nvme": [ 00:21:30.030 { 00:21:30.030 "pci_address": "0000:00:11.0", 00:21:30.030 "trid": { 00:21:30.030 "trtype": "PCIe", 00:21:30.030 "traddr": "0000:00:11.0" 00:21:30.030 }, 00:21:30.030 "ctrlr_data": { 00:21:30.030 "cntlid": 0, 00:21:30.030 "vendor_id": "0x1b36", 00:21:30.030 "model_number": "QEMU NVMe Ctrl", 00:21:30.030 "serial_number": "12341", 00:21:30.030 "firmware_revision": "8.0.0", 00:21:30.030 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:30.030 "oacs": { 00:21:30.030 "security": 0, 00:21:30.030 "format": 1, 00:21:30.030 "firmware": 0, 00:21:30.030 "ns_manage": 1 00:21:30.030 }, 00:21:30.030 "multi_ctrlr": false, 00:21:30.030 "ana_reporting": false 00:21:30.030 }, 00:21:30.030 "vs": { 00:21:30.030 "nvme_version": "1.4" 00:21:30.030 }, 00:21:30.030 "ns_data": { 00:21:30.030 "id": 1, 00:21:30.030 "can_share": false 00:21:30.030 } 00:21:30.030 } 00:21:30.030 ], 00:21:30.030 "mp_policy": "active_passive" 00:21:30.030 } 00:21:30.030 } 00:21:30.030 ]' 00:21:30.030 19:39:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:30.030 19:39:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:30.030 19:39:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:30.030 19:39:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:30.030 19:39:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:30.030 19:39:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:21:30.030 19:39:57 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:21:30.030 19:39:57 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:30.030 19:39:57 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:21:30.030 19:39:57 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:30.030 19:39:57 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:30.288 19:39:57 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=75649d47-23fc-4eb2-9bf2-9bcda1f63079 00:21:30.288 19:39:57 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:21:30.288 19:39:57 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 75649d47-23fc-4eb2-9bf2-9bcda1f63079 00:21:30.547 19:39:57 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:30.547 19:39:57 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=2c3972de-33b9-478e-9adc-e832bd5687bc 00:21:30.547 19:39:57 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2c3972de-33b9-478e-9adc-e832bd5687bc 00:21:30.867 19:39:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=e1d0bfff-5eab-44f4-b4db-e0b962769ec6 00:21:30.867 19:39:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e1d0bfff-5eab-44f4-b4db-e0b962769ec6 00:21:30.867 19:39:57 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:21:30.867 19:39:57 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:30.867 19:39:57 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=e1d0bfff-5eab-44f4-b4db-e0b962769ec6 00:21:30.867 19:39:57 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:21:30.867 19:39:57 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size e1d0bfff-5eab-44f4-b4db-e0b962769ec6 00:21:30.867 19:39:57 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=e1d0bfff-5eab-44f4-b4db-e0b962769ec6 00:21:30.867 19:39:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:30.867 19:39:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:21:30.867 19:39:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:30.867 19:39:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e1d0bfff-5eab-44f4-b4db-e0b962769ec6 00:21:31.125 19:39:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:31.125 { 00:21:31.125 "name": "e1d0bfff-5eab-44f4-b4db-e0b962769ec6", 00:21:31.125 "aliases": [ 00:21:31.125 "lvs/nvme0n1p0" 00:21:31.125 ], 00:21:31.125 "product_name": "Logical Volume", 00:21:31.125 "block_size": 4096, 00:21:31.125 "num_blocks": 26476544, 00:21:31.125 "uuid": "e1d0bfff-5eab-44f4-b4db-e0b962769ec6", 00:21:31.125 "assigned_rate_limits": { 00:21:31.125 "rw_ios_per_sec": 0, 00:21:31.125 "rw_mbytes_per_sec": 0, 00:21:31.125 "r_mbytes_per_sec": 0, 00:21:31.125 "w_mbytes_per_sec": 0 00:21:31.125 }, 00:21:31.125 "claimed": false, 00:21:31.125 "zoned": false, 00:21:31.125 "supported_io_types": { 00:21:31.125 "read": true, 00:21:31.125 "write": true, 00:21:31.125 "unmap": true, 00:21:31.125 "flush": false, 00:21:31.125 "reset": true, 00:21:31.125 "nvme_admin": false, 00:21:31.125 "nvme_io": false, 00:21:31.125 "nvme_io_md": false, 00:21:31.125 "write_zeroes": true, 00:21:31.125 "zcopy": false, 00:21:31.125 "get_zone_info": false, 00:21:31.125 "zone_management": false, 00:21:31.125 "zone_append": false, 00:21:31.125 "compare": false, 00:21:31.125 "compare_and_write": false, 00:21:31.125 "abort": false, 00:21:31.125 "seek_hole": true, 00:21:31.125 "seek_data": true, 00:21:31.125 "copy": false, 00:21:31.125 "nvme_iov_md": false 00:21:31.125 }, 00:21:31.125 "driver_specific": { 00:21:31.125 "lvol": { 00:21:31.125 "lvol_store_uuid": "2c3972de-33b9-478e-9adc-e832bd5687bc", 00:21:31.125 "base_bdev": "nvme0n1", 00:21:31.125 "thin_provision": true, 00:21:31.125 "num_allocated_clusters": 0, 00:21:31.125 "snapshot": false, 00:21:31.125 "clone": false, 00:21:31.125 "esnap_clone": false 00:21:31.125 } 00:21:31.125 } 00:21:31.125 } 00:21:31.125 ]' 00:21:31.125 19:39:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:31.125 19:39:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:31.125 19:39:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:31.125 19:39:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:31.125 19:39:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:31.125 19:39:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:21:31.125 19:39:58 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:21:31.125 19:39:58 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:21:31.125 19:39:58 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:31.387 19:39:58 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:31.387 19:39:58 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:31.387 19:39:58 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size e1d0bfff-5eab-44f4-b4db-e0b962769ec6 00:21:31.387 19:39:58 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=e1d0bfff-5eab-44f4-b4db-e0b962769ec6
00:21:31.387 19:39:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info
00:21:31.387 19:39:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs
00:21:31.387 19:39:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb
00:21:31.387 19:39:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e1d0bfff-5eab-44f4-b4db-e0b962769ec6
00:21:31.648 19:39:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ ... ]' (bdev_get_bdevs JSON for e1d0bfff-5eab-44f4-b4db-e0b962769ec6, identical to the dump printed just above; repeated dump omitted)
00:21:31.648 19:39:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:21:31.648 19:39:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096
00:21:31.648 19:39:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:21:31.648 19:39:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544
00:21:31.648 19:39:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:21:31.648 19:39:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424
00:21:31.648 19:39:58 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171
00:21:31.648 19:39:58 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:21:31.911 19:39:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0
00:21:31.911 19:39:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size e1d0bfff-5eab-44f4-b4db-e0b962769ec6
00:21:31.911 19:39:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=e1d0bfff-5eab-44f4-b4db-e0b962769ec6
00:21:31.911 19:39:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info
common/autotest_common.sh@1384 -- # local bs 00:21:31.911 19:39:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:31.911 19:39:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e1d0bfff-5eab-44f4-b4db-e0b962769ec6 00:21:32.171 19:39:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:32.171 { 00:21:32.171 "name": "e1d0bfff-5eab-44f4-b4db-e0b962769ec6", 00:21:32.171 "aliases": [ 00:21:32.171 "lvs/nvme0n1p0" 00:21:32.171 ], 00:21:32.171 "product_name": "Logical Volume", 00:21:32.171 "block_size": 4096, 00:21:32.171 "num_blocks": 26476544, 00:21:32.171 "uuid": "e1d0bfff-5eab-44f4-b4db-e0b962769ec6", 00:21:32.171 "assigned_rate_limits": { 00:21:32.171 "rw_ios_per_sec": 0, 00:21:32.171 "rw_mbytes_per_sec": 0, 00:21:32.171 "r_mbytes_per_sec": 0, 00:21:32.171 "w_mbytes_per_sec": 0 00:21:32.171 }, 00:21:32.171 "claimed": false, 00:21:32.171 "zoned": false, 00:21:32.171 "supported_io_types": { 00:21:32.171 "read": true, 00:21:32.171 "write": true, 00:21:32.171 "unmap": true, 00:21:32.171 "flush": false, 00:21:32.171 "reset": true, 00:21:32.171 "nvme_admin": false, 00:21:32.171 "nvme_io": false, 00:21:32.171 "nvme_io_md": false, 00:21:32.171 "write_zeroes": true, 00:21:32.171 "zcopy": false, 00:21:32.171 "get_zone_info": false, 00:21:32.171 "zone_management": false, 00:21:32.171 "zone_append": false, 00:21:32.171 "compare": false, 00:21:32.171 "compare_and_write": false, 00:21:32.171 "abort": false, 00:21:32.171 "seek_hole": true, 00:21:32.171 "seek_data": true, 00:21:32.171 "copy": false, 00:21:32.171 "nvme_iov_md": false 00:21:32.171 }, 00:21:32.171 "driver_specific": { 00:21:32.171 "lvol": { 00:21:32.171 "lvol_store_uuid": "2c3972de-33b9-478e-9adc-e832bd5687bc", 00:21:32.171 "base_bdev": "nvme0n1", 00:21:32.171 "thin_provision": true, 00:21:32.171 "num_allocated_clusters": 0, 00:21:32.171 "snapshot": false, 00:21:32.171 "clone": false, 00:21:32.171 "esnap_clone": false 00:21:32.171 } 00:21:32.171 } 00:21:32.171 } 00:21:32.171 ]' 00:21:32.171 19:39:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:32.171 19:39:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:32.171 19:39:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:32.171 19:39:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:32.171 19:39:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:32.171 19:39:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:21:32.171 19:39:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:21:32.171 19:39:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e1d0bfff-5eab-44f4-b4db-e0b962769ec6 -c nvc0n1p0 --l2p_dram_limit 20 00:21:32.433 [2024-12-05 19:39:59.455484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.433 [2024-12-05 19:39:59.455536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:32.433 [2024-12-05 19:39:59.455551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:32.433 [2024-12-05 19:39:59.455562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.433 [2024-12-05 19:39:59.455622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.433 [2024-12-05 19:39:59.455635] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:32.433 [2024-12-05 19:39:59.455644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:21:32.433 [2024-12-05 19:39:59.455654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.433 [2024-12-05 19:39:59.455683] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:32.433 [2024-12-05 19:39:59.456440] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:32.433 [2024-12-05 19:39:59.456457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.433 [2024-12-05 19:39:59.456467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:32.433 [2024-12-05 19:39:59.456477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:21:32.433 [2024-12-05 19:39:59.456487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.433 [2024-12-05 19:39:59.456547] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID be0b7266-c707-4bb5-b6cb-c41a23cdba59 00:21:32.433 [2024-12-05 19:39:59.457621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.433 [2024-12-05 19:39:59.457651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:32.433 [2024-12-05 19:39:59.457665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:32.433 [2024-12-05 19:39:59.457690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.433 [2024-12-05 19:39:59.462737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.433 [2024-12-05 19:39:59.462767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:32.433 [2024-12-05 19:39:59.462779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.010 ms 00:21:32.433 [2024-12-05 19:39:59.462789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.433 [2024-12-05 19:39:59.462869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.433 [2024-12-05 19:39:59.462878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:32.433 [2024-12-05 19:39:59.462890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:32.433 [2024-12-05 19:39:59.462898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.433 [2024-12-05 19:39:59.462941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.433 [2024-12-05 19:39:59.462951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:32.433 [2024-12-05 19:39:59.462960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:32.433 [2024-12-05 19:39:59.462967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.433 [2024-12-05 19:39:59.462989] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:32.433 [2024-12-05 19:39:59.466559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.433 [2024-12-05 19:39:59.466589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:32.433 [2024-12-05 19:39:59.466598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.578 ms 00:21:32.433 [2024-12-05 19:39:59.466609] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.433 [2024-12-05 19:39:59.466639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.433 [2024-12-05 19:39:59.466649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:32.433 [2024-12-05 19:39:59.466657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:32.433 [2024-12-05 19:39:59.466666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.433 [2024-12-05 19:39:59.466709] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:32.433 [2024-12-05 19:39:59.466857] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:32.433 [2024-12-05 19:39:59.466869] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:32.433 [2024-12-05 19:39:59.466881] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:32.433 [2024-12-05 19:39:59.466891] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:32.433 [2024-12-05 19:39:59.466902] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:32.433 [2024-12-05 19:39:59.466910] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:32.433 [2024-12-05 19:39:59.466919] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:32.433 [2024-12-05 19:39:59.466927] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:32.433 [2024-12-05 19:39:59.466936] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:32.433 [2024-12-05 19:39:59.466945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.433 [2024-12-05 19:39:59.466953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:32.433 [2024-12-05 19:39:59.466961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:21:32.433 [2024-12-05 19:39:59.466970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.433 [2024-12-05 19:39:59.467052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.433 [2024-12-05 19:39:59.467062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:32.433 [2024-12-05 19:39:59.467070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:32.433 [2024-12-05 19:39:59.467080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.433 [2024-12-05 19:39:59.467181] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:32.433 [2024-12-05 19:39:59.467195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:32.433 [2024-12-05 19:39:59.467203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:32.433 [2024-12-05 19:39:59.467212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.433 [2024-12-05 19:39:59.467219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:32.433 [2024-12-05 19:39:59.467227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:32.433 [2024-12-05 19:39:59.467234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:32.433 
[2024-12-05 19:39:59.467242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:32.433 [2024-12-05 19:39:59.467248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:32.433 [2024-12-05 19:39:59.467258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:32.433 [2024-12-05 19:39:59.467264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:32.433 [2024-12-05 19:39:59.467278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:32.433 [2024-12-05 19:39:59.467285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:32.433 [2024-12-05 19:39:59.467293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:32.433 [2024-12-05 19:39:59.467299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:32.433 [2024-12-05 19:39:59.467308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.433 [2024-12-05 19:39:59.467315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:32.433 [2024-12-05 19:39:59.467323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:32.433 [2024-12-05 19:39:59.467329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.433 [2024-12-05 19:39:59.467338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:32.433 [2024-12-05 19:39:59.467344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:32.433 [2024-12-05 19:39:59.467351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:32.433 [2024-12-05 19:39:59.467358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:32.433 [2024-12-05 19:39:59.467366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:32.433 [2024-12-05 19:39:59.467374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:32.433 [2024-12-05 19:39:59.467382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:32.433 [2024-12-05 19:39:59.467389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:32.433 [2024-12-05 19:39:59.467397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:32.433 [2024-12-05 19:39:59.467403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:32.433 [2024-12-05 19:39:59.467411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:32.433 [2024-12-05 19:39:59.467418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:32.433 [2024-12-05 19:39:59.467427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:32.433 [2024-12-05 19:39:59.467434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:32.433 [2024-12-05 19:39:59.467443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:32.433 [2024-12-05 19:39:59.467449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:32.433 [2024-12-05 19:39:59.467457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:32.433 [2024-12-05 19:39:59.467463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:32.433 [2024-12-05 19:39:59.467471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:32.434 [2024-12-05 19:39:59.467478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:21:32.434 [2024-12-05 19:39:59.467485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.434 [2024-12-05 19:39:59.467492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:32.434 [2024-12-05 19:39:59.467500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:32.434 [2024-12-05 19:39:59.467506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.434 [2024-12-05 19:39:59.467513] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:32.434 [2024-12-05 19:39:59.467520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:32.434 [2024-12-05 19:39:59.467529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:32.434 [2024-12-05 19:39:59.467536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.434 [2024-12-05 19:39:59.467546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:32.434 [2024-12-05 19:39:59.467552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:32.434 [2024-12-05 19:39:59.467560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:32.434 [2024-12-05 19:39:59.467567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:32.434 [2024-12-05 19:39:59.467575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:32.434 [2024-12-05 19:39:59.467582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:32.434 [2024-12-05 19:39:59.467591] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:32.434 [2024-12-05 19:39:59.467602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:32.434 [2024-12-05 19:39:59.467611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:32.434 [2024-12-05 19:39:59.467626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:32.434 [2024-12-05 19:39:59.467635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:32.434 [2024-12-05 19:39:59.467642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:32.434 [2024-12-05 19:39:59.467651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:32.434 [2024-12-05 19:39:59.467658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:32.434 [2024-12-05 19:39:59.467678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:32.434 [2024-12-05 19:39:59.467686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:32.434 [2024-12-05 19:39:59.467696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:32.434 [2024-12-05 19:39:59.467703] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:32.434 [2024-12-05 19:39:59.467711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:32.434 [2024-12-05 19:39:59.467719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:32.434 [2024-12-05 19:39:59.467727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:32.434 [2024-12-05 19:39:59.467734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:32.434 [2024-12-05 19:39:59.467743] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:32.434 [2024-12-05 19:39:59.467750] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:32.434 [2024-12-05 19:39:59.467762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:32.434 [2024-12-05 19:39:59.467769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:32.434 [2024-12-05 19:39:59.467778] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:32.434 [2024-12-05 19:39:59.467785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:32.434 [2024-12-05 19:39:59.467794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.434 [2024-12-05 19:39:59.467801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:32.434 [2024-12-05 19:39:59.467810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:21:32.434 [2024-12-05 19:39:59.467817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.434 [2024-12-05 19:39:59.467850] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:21:32.434 [2024-12-05 19:39:59.467859] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:35.732 [2024-12-05 19:40:02.740296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.732 [2024-12-05 19:40:02.740359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:35.733 [2024-12-05 19:40:02.740376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3272.431 ms 00:21:35.733 [2024-12-05 19:40:02.740386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.733 [2024-12-05 19:40:02.767342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.733 [2024-12-05 19:40:02.767389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:35.733 [2024-12-05 19:40:02.767404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.740 ms 00:21:35.733 [2024-12-05 19:40:02.767412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.733 [2024-12-05 19:40:02.767548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.733 [2024-12-05 19:40:02.767564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:35.733 [2024-12-05 19:40:02.767577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:35.733 [2024-12-05 19:40:02.767584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.733 [2024-12-05 19:40:02.810007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.733 [2024-12-05 19:40:02.810053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:35.733 [2024-12-05 19:40:02.810067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.364 ms 00:21:35.733 [2024-12-05 19:40:02.810075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.733 [2024-12-05 19:40:02.810118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.733 [2024-12-05 19:40:02.810128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:35.733 [2024-12-05 19:40:02.810138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:35.733 [2024-12-05 19:40:02.810148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.733 [2024-12-05 19:40:02.810537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.733 [2024-12-05 19:40:02.810555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:35.733 [2024-12-05 19:40:02.810566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:21:35.733 [2024-12-05 19:40:02.810574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.733 [2024-12-05 19:40:02.810704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.733 [2024-12-05 19:40:02.810714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:35.733 [2024-12-05 19:40:02.810726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:21:35.733 [2024-12-05 19:40:02.810733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.733 [2024-12-05 19:40:02.824231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.733 [2024-12-05 19:40:02.824264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:35.733 [2024-12-05 
19:40:02.824276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.479 ms 00:21:35.733 [2024-12-05 19:40:02.824290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.733 [2024-12-05 19:40:02.835897] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:21:35.733 [2024-12-05 19:40:02.840959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.733 [2024-12-05 19:40:02.840991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:35.733 [2024-12-05 19:40:02.841002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.603 ms 00:21:35.733 [2024-12-05 19:40:02.841012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.733 [2024-12-05 19:40:02.913207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.733 [2024-12-05 19:40:02.913262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:35.733 [2024-12-05 19:40:02.913276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.168 ms 00:21:35.733 [2024-12-05 19:40:02.913287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.733 [2024-12-05 19:40:02.913467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.733 [2024-12-05 19:40:02.913482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:35.733 [2024-12-05 19:40:02.913491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:21:35.733 [2024-12-05 19:40:02.913503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.733 [2024-12-05 19:40:02.937484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.733 [2024-12-05 19:40:02.937525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:35.733 [2024-12-05 19:40:02.937536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.940 ms 00:21:35.733 [2024-12-05 19:40:02.937546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.733 [2024-12-05 19:40:02.960778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.733 [2024-12-05 19:40:02.960822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:35.733 [2024-12-05 19:40:02.960834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.198 ms 00:21:35.733 [2024-12-05 19:40:02.960844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.733 [2024-12-05 19:40:02.961405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.733 [2024-12-05 19:40:02.961425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:35.733 [2024-12-05 19:40:02.961434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:21:35.733 [2024-12-05 19:40:02.961443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.995 [2024-12-05 19:40:03.033464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.995 [2024-12-05 19:40:03.033515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:35.995 [2024-12-05 19:40:03.033527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.991 ms 00:21:35.995 [2024-12-05 19:40:03.033537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.995 [2024-12-05 
19:40:03.057985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.995 [2024-12-05 19:40:03.058027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:35.995 [2024-12-05 19:40:03.058043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.381 ms 00:21:35.995 [2024-12-05 19:40:03.058054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.995 [2024-12-05 19:40:03.082109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.995 [2024-12-05 19:40:03.082147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:35.995 [2024-12-05 19:40:03.082158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.020 ms 00:21:35.995 [2024-12-05 19:40:03.082166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.995 [2024-12-05 19:40:03.107147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.995 [2024-12-05 19:40:03.107188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:35.995 [2024-12-05 19:40:03.107200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.947 ms 00:21:35.995 [2024-12-05 19:40:03.107210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.995 [2024-12-05 19:40:03.107249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.995 [2024-12-05 19:40:03.107263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:35.995 [2024-12-05 19:40:03.107272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:35.995 [2024-12-05 19:40:03.107281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.995 [2024-12-05 19:40:03.107355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.995 [2024-12-05 19:40:03.107367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:35.995 [2024-12-05 19:40:03.107375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:35.995 [2024-12-05 19:40:03.107384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.995 [2024-12-05 19:40:03.108353] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3652.464 ms, result 0 00:21:35.995 { 00:21:35.995 "name": "ftl0", 00:21:35.995 "uuid": "be0b7266-c707-4bb5-b6cb-c41a23cdba59" 00:21:35.995 } 00:21:35.995 19:40:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:21:35.995 19:40:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:21:35.996 19:40:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:21:36.257 19:40:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:21:36.257 [2024-12-05 19:40:03.416555] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:36.257 I/O size of 69632 is greater than zero copy threshold (65536). 00:21:36.257 Zero copy mechanism will not be used. 00:21:36.257 Running I/O for 4 seconds... 
00:21:38.587 1177.00 IOPS, 78.16 MiB/s [2024-12-05T19:40:06.785Z] 973.00 IOPS, 64.61 MiB/s [2024-12-05T19:40:07.745Z] 947.67 IOPS, 62.93 MiB/s [2024-12-05T19:40:07.745Z] 870.50 IOPS, 57.81 MiB/s 00:21:40.490 Latency(us) 00:21:40.490 [2024-12-05T19:40:07.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.490 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:21:40.490 ftl0 : 4.00 870.13 57.78 0.00 0.00 1209.00 186.68 23391.31 00:21:40.490 [2024-12-05T19:40:07.745Z] =================================================================================================================== 00:21:40.490 [2024-12-05T19:40:07.745Z] Total : 870.13 57.78 0.00 0.00 1209.00 186.68 23391.31 00:21:40.490 [2024-12-05 19:40:07.427536] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:40.490 { 00:21:40.490 "results": [ 00:21:40.490 { 00:21:40.490 "job": "ftl0", 00:21:40.490 "core_mask": "0x1", 00:21:40.490 "workload": "randwrite", 00:21:40.490 "status": "finished", 00:21:40.490 "queue_depth": 1, 00:21:40.490 "io_size": 69632, 00:21:40.490 "runtime": 4.002852, 00:21:40.490 "iops": 870.1295975969134, 00:21:40.490 "mibps": 57.78204359042003, 00:21:40.490 "io_failed": 0, 00:21:40.490 "io_timeout": 0, 00:21:40.490 "avg_latency_us": 1209.0021917445174, 00:21:40.490 "min_latency_us": 186.68307692307692, 00:21:40.490 "max_latency_us": 23391.310769230768 00:21:40.490 } 00:21:40.490 ], 00:21:40.490 "core_count": 1 00:21:40.490 } 00:21:40.490 19:40:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:21:40.490 [2024-12-05 19:40:07.542728] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:40.490 Running I/O for 4 seconds... 
00:21:42.376 8098.00 IOPS, 31.63 MiB/s [2024-12-05T19:40:10.573Z] 6671.50 IOPS, 26.06 MiB/s [2024-12-05T19:40:11.957Z] 6080.33 IOPS, 23.75 MiB/s [2024-12-05T19:40:11.957Z] 5816.25 IOPS, 22.72 MiB/s 00:21:44.702 Latency(us) 00:21:44.702 [2024-12-05T19:40:11.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.702 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:21:44.702 ftl0 : 4.03 5797.24 22.65 0.00 0.00 21999.91 266.24 48395.82 00:21:44.702 [2024-12-05T19:40:11.957Z] =================================================================================================================== 00:21:44.702 [2024-12-05T19:40:11.957Z] Total : 5797.24 22.65 0.00 0.00 21999.91 0.00 48395.82 00:21:44.702 [2024-12-05 19:40:11.586752] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:44.702 { 00:21:44.702 "results": [ 00:21:44.702 { 00:21:44.702 "job": "ftl0", 00:21:44.702 "core_mask": "0x1", 00:21:44.702 "workload": "randwrite", 00:21:44.702 "status": "finished", 00:21:44.702 "queue_depth": 128, 00:21:44.702 "io_size": 4096, 00:21:44.702 "runtime": 4.033989, 00:21:44.702 "iops": 5797.239407445088, 00:21:44.702 "mibps": 22.645466435332374, 00:21:44.702 "io_failed": 0, 00:21:44.702 "io_timeout": 0, 00:21:44.702 "avg_latency_us": 21999.91337355025, 00:21:44.702 "min_latency_us": 266.24, 00:21:44.702 "max_latency_us": 48395.81538461539 00:21:44.702 } 00:21:44.702 ], 00:21:44.702 "core_count": 1 00:21:44.702 } 00:21:44.702 19:40:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:21:44.702 [2024-12-05 19:40:11.686928] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:44.702 Running I/O for 4 seconds... 
00:21:46.589 4974.00 IOPS, 19.43 MiB/s [2024-12-05T19:40:14.788Z] 5014.00 IOPS, 19.59 MiB/s [2024-12-05T19:40:15.730Z] 4953.00 IOPS, 19.35 MiB/s [2024-12-05T19:40:15.730Z] 4966.50 IOPS, 19.40 MiB/s 00:21:48.475 Latency(us) 00:21:48.475 [2024-12-05T19:40:15.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.475 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:48.475 Verification LBA range: start 0x0 length 0x1400000 00:21:48.475 ftl0 : 4.02 4976.32 19.44 0.00 0.00 25638.90 330.83 37506.76 00:21:48.475 [2024-12-05T19:40:15.730Z] =================================================================================================================== 00:21:48.475 [2024-12-05T19:40:15.730Z] Total : 4976.32 19.44 0.00 0.00 25638.90 0.00 37506.76 00:21:48.475 [2024-12-05 19:40:15.717155] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 { 00:21:48.475 "results": [ 00:21:48.475 { 00:21:48.475 "job": "ftl0", 00:21:48.475 "core_mask": "0x1", 00:21:48.475 "workload": "verify", 00:21:48.475 "status": "finished", 00:21:48.475 "verify_range": { 00:21:48.475 "start": 0, 00:21:48.475 "length": 20971520 00:21:48.475 }, 00:21:48.475 "queue_depth": 128, 00:21:48.475 "io_size": 4096, 00:21:48.475 "runtime": 4.015415, 00:21:48.475 "iops": 4976.322497176506, 00:21:48.475 "mibps": 19.438759754595726, 00:21:48.475 "io_failed": 0, 00:21:48.475 "io_timeout": 0, 00:21:48.475 "avg_latency_us": 25638.89998475551, 00:21:48.475 "min_latency_us": 330.83076923076925, 00:21:48.475 "max_latency_us": 37506.75692307692 00:21:48.475 } 00:21:48.475 ], 00:21:48.475 "core_count": 1 00:21:48.475 } 00:21:48.736 19:40:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 [2024-12-05 19:40:15.914365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-12-05 19:40:15.914412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel [2024-12-05 19:40:15.914426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms [2024-12-05 19:40:15.914436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 [2024-12-05 19:40:15.914457] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread [2024-12-05 19:40:15.917096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-12-05 19:40:15.917123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device [2024-12-05 19:40:15.917136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.621 ms [2024-12-05 19:40:15.917144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 [2024-12-05 19:40:15.920076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-12-05 19:40:15.920107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller [2024-12-05 19:40:15.920124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.907 ms [2024-12-05 19:40:15.920131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 [2024-12-05 19:40:16.115145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-12-05 19:40:16.115194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:21:48.996 [2024-12-05 19:40:16.115213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 194.991 ms 00:21:48.996 [2024-12-05 19:40:16.115222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.996 [2024-12-05 19:40:16.121421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.996 [2024-12-05 19:40:16.121447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:48.996 [2024-12-05 19:40:16.121461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.163 ms 00:21:48.996 [2024-12-05 19:40:16.121472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.996 [2024-12-05 19:40:16.146499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.996 [2024-12-05 19:40:16.146529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:48.996 [2024-12-05 19:40:16.146542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.977 ms 00:21:48.996 [2024-12-05 19:40:16.146550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.996 [2024-12-05 19:40:16.161990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.996 [2024-12-05 19:40:16.162019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:48.996 [2024-12-05 19:40:16.162033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.403 ms 00:21:48.996 [2024-12-05 19:40:16.162042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.996 [2024-12-05 19:40:16.162179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.996 [2024-12-05 19:40:16.162190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:48.996 [2024-12-05 19:40:16.162202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:21:48.996 [2024-12-05 19:40:16.162209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.996 [2024-12-05 19:40:16.185169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.996 [2024-12-05 19:40:16.185195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:48.996 [2024-12-05 19:40:16.185206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.943 ms 00:21:48.996 [2024-12-05 19:40:16.185213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.996 [2024-12-05 19:40:16.211081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.996 [2024-12-05 19:40:16.211113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:48.996 [2024-12-05 19:40:16.211127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.830 ms 00:21:48.996 [2024-12-05 19:40:16.211135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.996 [2024-12-05 19:40:16.234308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.996 [2024-12-05 19:40:16.234337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:48.996 [2024-12-05 19:40:16.234350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.130 ms 00:21:48.996 [2024-12-05 19:40:16.234358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.257 [2024-12-05 19:40:16.257182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.257 [2024-12-05 19:40:16.257209] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:49.257 [2024-12-05 19:40:16.257224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.753 ms 00:21:49.257 [2024-12-05 19:40:16.257233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.257 [2024-12-05 19:40:16.257267] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:49.257 [2024-12-05 19:40:16.257281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:21:49.257 [2024-12-05 19:40:16.257467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:49.257 [2024-12-05 19:40:16.257790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.257992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.258000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.258009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.258016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.258024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.258032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.258040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.258047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.258056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.258064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.258076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.258084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.258092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.258099] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.258109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.258117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.258126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:49.258 [2024-12-05 19:40:16.258141] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:49.258 [2024-12-05 19:40:16.258157] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: be0b7266-c707-4bb5-b6cb-c41a23cdba59 00:21:49.258 [2024-12-05 19:40:16.258166] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:49.258 [2024-12-05 19:40:16.258175] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:49.258 [2024-12-05 19:40:16.258182] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:49.258 [2024-12-05 19:40:16.258191] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:49.258 [2024-12-05 19:40:16.258197] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:49.258 [2024-12-05 19:40:16.258206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:49.258 [2024-12-05 19:40:16.258213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:49.258 [2024-12-05 19:40:16.258223] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:49.258 [2024-12-05 19:40:16.258229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:49.258 [2024-12-05 19:40:16.258237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.258 [2024-12-05 19:40:16.258244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:49.258 [2024-12-05 19:40:16.258256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:21:49.258 [2024-12-05 19:40:16.258263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.258 [2024-12-05 19:40:16.270657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.258 [2024-12-05 19:40:16.270702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:49.258 [2024-12-05 19:40:16.270715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.364 ms 00:21:49.258 [2024-12-05 19:40:16.270723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.258 [2024-12-05 19:40:16.271069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.258 [2024-12-05 19:40:16.271084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:49.258 [2024-12-05 19:40:16.271094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:21:49.258 [2024-12-05 19:40:16.271101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.258 [2024-12-05 19:40:16.305919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.258 [2024-12-05 19:40:16.305945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:49.258 [2024-12-05 19:40:16.305958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.258 [2024-12-05 19:40:16.305966] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:49.258 [2024-12-05 19:40:16.306021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.258 [2024-12-05 19:40:16.306029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:49.258 [2024-12-05 19:40:16.306038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.258 [2024-12-05 19:40:16.306045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.258 [2024-12-05 19:40:16.306128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.258 [2024-12-05 19:40:16.306138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:49.258 [2024-12-05 19:40:16.306148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.258 [2024-12-05 19:40:16.306155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.258 [2024-12-05 19:40:16.306171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.258 [2024-12-05 19:40:16.306178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:49.258 [2024-12-05 19:40:16.306187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.258 [2024-12-05 19:40:16.306194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.258 [2024-12-05 19:40:16.384078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.258 [2024-12-05 19:40:16.384113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:49.258 [2024-12-05 19:40:16.384130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.258 [2024-12-05 19:40:16.384138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.258 [2024-12-05 19:40:16.446545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.258 [2024-12-05 19:40:16.446577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:49.258 [2024-12-05 19:40:16.446590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.258 [2024-12-05 19:40:16.446598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.258 [2024-12-05 19:40:16.446679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.258 [2024-12-05 19:40:16.446690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:49.258 [2024-12-05 19:40:16.446700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.258 [2024-12-05 19:40:16.446708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.258 [2024-12-05 19:40:16.446768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.258 [2024-12-05 19:40:16.446778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:49.258 [2024-12-05 19:40:16.446788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.258 [2024-12-05 19:40:16.446795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.259 [2024-12-05 19:40:16.446880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.259 [2024-12-05 19:40:16.446891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:49.259 [2024-12-05 19:40:16.446903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms
00:21:49.259 [2024-12-05 19:40:16.446910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:49.259 [2024-12-05 19:40:16.446939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:49.259 [2024-12-05 19:40:16.446947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:21:49.259 [2024-12-05 19:40:16.446957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:49.259 [2024-12-05 19:40:16.446964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:49.259 [2024-12-05 19:40:16.446997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:49.259 [2024-12-05 19:40:16.447007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:21:49.259 [2024-12-05 19:40:16.447016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:49.259 [2024-12-05 19:40:16.447029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:49.259 [2024-12-05 19:40:16.447070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:49.259 [2024-12-05 19:40:16.447080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:49.259 [2024-12-05 19:40:16.447090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:49.259 [2024-12-05 19:40:16.447097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:49.259 [2024-12-05 19:40:16.447215] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 532.812 ms, result 0
00:21:49.259 true
00:21:49.259 19:40:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 76336
00:21:49.259 19:40:16 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 76336 ']'
00:21:49.259 19:40:16 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 76336
00:21:49.259 19:40:16 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname
00:21:49.259 19:40:16 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:49.259 19:40:16 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76336
00:21:49.259 killing process with pid 76336
Received shutdown signal, test time was about 4.000000 seconds
00:21:49.259
00:21:49.259 Latency(us)
00:21:49.259 [2024-12-05T19:40:16.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:49.259 [2024-12-05T19:40:16.514Z] ===================================================================================================================
00:21:49.259 [2024-12-05T19:40:16.514Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:49.259 19:40:16 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:49.259 19:40:16 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:49.259 19:40:16 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76336'
00:21:49.259 19:40:16 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 76336
00:21:49.259 19:40:16 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 76336
00:21:50.199 Remove shared memory files
00:21:50.199 19:40:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:21:50.199 19:40:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
00:21:50.199 19:40:17 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:21:50.199 19:40:17
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:21:50.199 19:40:17 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:21:50.199 19:40:17 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:21:50.199 19:40:17 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:50.199 19:40:17 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:21:50.199 00:21:50.199 real 0m21.790s 00:21:50.199 user 0m24.467s 00:21:50.199 sys 0m0.787s 00:21:50.199 19:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:50.199 ************************************ 00:21:50.199 END TEST ftl_bdevperf 00:21:50.199 19:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:50.199 ************************************ 00:21:50.199 19:40:17 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:50.199 19:40:17 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:50.199 19:40:17 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:50.199 19:40:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:50.199 ************************************ 00:21:50.199 START TEST ftl_trim 00:21:50.199 ************************************ 00:21:50.199 19:40:17 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:50.199 * Looking for test storage... 00:21:50.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:50.460 19:40:17 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:50.460 19:40:17 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:21:50.460 19:40:17 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:50.460 19:40:17 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:50.460 19:40:17 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:50.460 19:40:17 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:50.461 19:40:17 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:21:50.461 19:40:17 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:50.461 19:40:17 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:50.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.461 --rc genhtml_branch_coverage=1 00:21:50.461 --rc genhtml_function_coverage=1 00:21:50.461 --rc genhtml_legend=1 00:21:50.461 --rc geninfo_all_blocks=1 00:21:50.461 --rc geninfo_unexecuted_blocks=1 00:21:50.461 00:21:50.461 ' 00:21:50.461 19:40:17 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:50.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.461 --rc genhtml_branch_coverage=1 00:21:50.461 --rc genhtml_function_coverage=1 00:21:50.461 --rc genhtml_legend=1 00:21:50.461 --rc geninfo_all_blocks=1 00:21:50.461 --rc geninfo_unexecuted_blocks=1 00:21:50.461 00:21:50.461 ' 00:21:50.461 19:40:17 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:50.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.461 --rc genhtml_branch_coverage=1 00:21:50.461 --rc genhtml_function_coverage=1 00:21:50.461 --rc genhtml_legend=1 00:21:50.461 --rc geninfo_all_blocks=1 00:21:50.461 --rc geninfo_unexecuted_blocks=1 00:21:50.461 00:21:50.461 ' 00:21:50.461 19:40:17 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:50.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.461 --rc genhtml_branch_coverage=1 00:21:50.461 --rc genhtml_function_coverage=1 00:21:50.461 --rc genhtml_legend=1 00:21:50.461 --rc geninfo_all_blocks=1 00:21:50.461 --rc geninfo_unexecuted_blocks=1 00:21:50.461 00:21:50.461 ' 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
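The xtrace above walks the version comparison in scripts/common.sh: both dotted version strings are split on '.' into arrays, the components are compared pairwise, and the first differing component decides the result (here the `lt 1.15 2` check holds because 1 < 2 in the major slot, so the lcov coverage options get enabled). A minimal standalone sketch of that component-wise comparison, with an illustrative function name rather than the exact upstream source:

#!/usr/bin/env bash
# Sketch of dotted-version comparison in the style of the cmp_versions
# trace above; version_lt is an illustrative name, not the upstream API.
version_lt() {                           # succeeds iff $1 < $2
    local IFS=.
    local -a a=($1) b=($2)               # split "1.15" -> (1 15), "2" -> (2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}  # missing components count as 0
        (( 10#$x < 10#$y )) && return 0  # first differing component decides
        (( 10#$x > 10#$y )) && return 1
    done
    return 1                             # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"     # mirrors the 'lt 1.15 2' call traced above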
00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:50.461 19:40:17 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76682 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76682 00:21:50.461 19:40:17 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76682 ']' 00:21:50.461 19:40:17 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.461 19:40:17 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:50.461 19:40:17 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.461 19:40:17 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.461 19:40:17 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:21:50.461 19:40:17 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:50.461 [2024-12-05 19:40:17.619313] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:21:50.462 [2024-12-05 19:40:17.619429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76682 ] 00:21:50.723 [2024-12-05 19:40:17.774961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:50.723 [2024-12-05 19:40:17.874373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.723 [2024-12-05 19:40:17.874645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.723 [2024-12-05 19:40:17.874668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.293 19:40:18 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.293 19:40:18 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:51.293 19:40:18 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:51.293 19:40:18 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:21:51.293 19:40:18 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:51.293 19:40:18 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:21:51.293 19:40:18 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:21:51.293 19:40:18 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:51.551 19:40:18 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:51.551 19:40:18 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:21:51.551 19:40:18 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:51.551 19:40:18 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:51.551 19:40:18 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:51.551 19:40:18 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:51.551 19:40:18 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:51.551 19:40:18 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:51.812 19:40:18 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:51.812 { 00:21:51.812 "name": "nvme0n1", 00:21:51.812 "aliases": [ 
00:21:51.812 "a98ca0ab-945c-451d-b150-499900d32c2f" 00:21:51.812 ], 00:21:51.812 "product_name": "NVMe disk", 00:21:51.812 "block_size": 4096, 00:21:51.812 "num_blocks": 1310720, 00:21:51.812 "uuid": "a98ca0ab-945c-451d-b150-499900d32c2f", 00:21:51.812 "numa_id": -1, 00:21:51.812 "assigned_rate_limits": { 00:21:51.812 "rw_ios_per_sec": 0, 00:21:51.812 "rw_mbytes_per_sec": 0, 00:21:51.812 "r_mbytes_per_sec": 0, 00:21:51.812 "w_mbytes_per_sec": 0 00:21:51.812 }, 00:21:51.812 "claimed": true, 00:21:51.812 "claim_type": "read_many_write_one", 00:21:51.812 "zoned": false, 00:21:51.812 "supported_io_types": { 00:21:51.812 "read": true, 00:21:51.812 "write": true, 00:21:51.812 "unmap": true, 00:21:51.812 "flush": true, 00:21:51.812 "reset": true, 00:21:51.812 "nvme_admin": true, 00:21:51.812 "nvme_io": true, 00:21:51.812 "nvme_io_md": false, 00:21:51.812 "write_zeroes": true, 00:21:51.812 "zcopy": false, 00:21:51.812 "get_zone_info": false, 00:21:51.812 "zone_management": false, 00:21:51.812 "zone_append": false, 00:21:51.812 "compare": true, 00:21:51.812 "compare_and_write": false, 00:21:51.812 "abort": true, 00:21:51.812 "seek_hole": false, 00:21:51.812 "seek_data": false, 00:21:51.812 "copy": true, 00:21:51.812 "nvme_iov_md": false 00:21:51.812 }, 00:21:51.812 "driver_specific": { 00:21:51.812 "nvme": [ 00:21:51.812 { 00:21:51.812 "pci_address": "0000:00:11.0", 00:21:51.812 "trid": { 00:21:51.812 "trtype": "PCIe", 00:21:51.812 "traddr": "0000:00:11.0" 00:21:51.812 }, 00:21:51.812 "ctrlr_data": { 00:21:51.812 "cntlid": 0, 00:21:51.812 "vendor_id": "0x1b36", 00:21:51.812 "model_number": "QEMU NVMe Ctrl", 00:21:51.812 "serial_number": "12341", 00:21:51.812 "firmware_revision": "8.0.0", 00:21:51.812 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:51.812 "oacs": { 00:21:51.812 "security": 0, 00:21:51.812 "format": 1, 00:21:51.812 "firmware": 0, 00:21:51.812 "ns_manage": 1 00:21:51.812 }, 00:21:51.812 "multi_ctrlr": false, 00:21:51.812 "ana_reporting": false 00:21:51.812 }, 00:21:51.812 "vs": { 00:21:51.812 "nvme_version": "1.4" 00:21:51.812 }, 00:21:51.812 "ns_data": { 00:21:51.812 "id": 1, 00:21:51.812 "can_share": false 00:21:51.812 } 00:21:51.812 } 00:21:51.812 ], 00:21:51.812 "mp_policy": "active_passive" 00:21:51.812 } 00:21:51.812 } 00:21:51.812 ]' 00:21:51.812 19:40:18 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:51.812 19:40:18 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:51.812 19:40:18 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:51.812 19:40:19 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:51.812 19:40:19 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:51.812 19:40:19 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:21:51.812 19:40:19 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:21:51.812 19:40:19 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:51.812 19:40:19 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:21:51.812 19:40:19 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:51.812 19:40:19 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:52.073 19:40:19 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=2c3972de-33b9-478e-9adc-e832bd5687bc 00:21:52.073 19:40:19 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:21:52.073 19:40:19 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 2c3972de-33b9-478e-9adc-e832bd5687bc 00:21:52.333 19:40:19 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:52.593 19:40:19 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=afa4f4f1-fad5-4ca1-b9ec-6a51cb9c58e4 00:21:52.593 19:40:19 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u afa4f4f1-fad5-4ca1-b9ec-6a51cb9c58e4 00:21:52.853 19:40:19 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=7b5184af-ab97-4370-8c7f-f13a30b77a38 00:21:52.853 19:40:19 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7b5184af-ab97-4370-8c7f-f13a30b77a38 00:21:52.853 19:40:19 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:21:52.853 19:40:19 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:52.853 19:40:19 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=7b5184af-ab97-4370-8c7f-f13a30b77a38 00:21:52.853 19:40:19 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:21:52.853 19:40:19 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 7b5184af-ab97-4370-8c7f-f13a30b77a38 00:21:52.853 19:40:19 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=7b5184af-ab97-4370-8c7f-f13a30b77a38 00:21:52.853 19:40:19 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:52.853 19:40:19 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:52.853 19:40:19 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:52.853 19:40:19 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7b5184af-ab97-4370-8c7f-f13a30b77a38 00:21:52.853 19:40:20 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:52.853 { 00:21:52.853 "name": "7b5184af-ab97-4370-8c7f-f13a30b77a38", 00:21:52.853 "aliases": [ 00:21:52.853 "lvs/nvme0n1p0" 00:21:52.853 ], 00:21:52.853 "product_name": "Logical Volume", 00:21:52.853 "block_size": 4096, 00:21:52.853 "num_blocks": 26476544, 00:21:52.853 "uuid": "7b5184af-ab97-4370-8c7f-f13a30b77a38", 00:21:52.853 "assigned_rate_limits": { 00:21:52.853 "rw_ios_per_sec": 0, 00:21:52.853 "rw_mbytes_per_sec": 0, 00:21:52.853 "r_mbytes_per_sec": 0, 00:21:52.853 "w_mbytes_per_sec": 0 00:21:52.853 }, 00:21:52.853 "claimed": false, 00:21:52.853 "zoned": false, 00:21:52.853 "supported_io_types": { 00:21:52.853 "read": true, 00:21:52.853 "write": true, 00:21:52.853 "unmap": true, 00:21:52.853 "flush": false, 00:21:52.853 "reset": true, 00:21:52.853 "nvme_admin": false, 00:21:52.853 "nvme_io": false, 00:21:52.853 "nvme_io_md": false, 00:21:52.853 "write_zeroes": true, 00:21:52.853 "zcopy": false, 00:21:52.853 "get_zone_info": false, 00:21:52.853 "zone_management": false, 00:21:52.853 "zone_append": false, 00:21:52.853 "compare": false, 00:21:52.853 "compare_and_write": false, 00:21:52.853 "abort": false, 00:21:52.853 "seek_hole": true, 00:21:52.853 "seek_data": true, 00:21:52.853 "copy": false, 00:21:52.853 "nvme_iov_md": false 00:21:52.853 }, 00:21:52.853 "driver_specific": { 00:21:52.853 "lvol": { 00:21:52.853 "lvol_store_uuid": "afa4f4f1-fad5-4ca1-b9ec-6a51cb9c58e4", 00:21:52.853 "base_bdev": "nvme0n1", 00:21:52.853 "thin_provision": true, 00:21:52.853 "num_allocated_clusters": 0, 00:21:52.853 "snapshot": false, 00:21:52.853 "clone": false, 00:21:52.853 "esnap_clone": false 00:21:52.853 } 00:21:52.853 } 00:21:52.853 } 00:21:52.853 ]' 00:21:52.853 19:40:20 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:52.853 19:40:20 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:52.853 19:40:20 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:53.113 19:40:20 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:53.113 19:40:20 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:53.113 19:40:20 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:53.113 19:40:20 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:21:53.113 19:40:20 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:21:53.113 19:40:20 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:53.372 19:40:20 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:53.373 19:40:20 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:53.373 19:40:20 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 7b5184af-ab97-4370-8c7f-f13a30b77a38 00:21:53.373 19:40:20 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=7b5184af-ab97-4370-8c7f-f13a30b77a38 00:21:53.373 19:40:20 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:53.373 19:40:20 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:53.373 19:40:20 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:53.373 19:40:20 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7b5184af-ab97-4370-8c7f-f13a30b77a38 00:21:53.373 19:40:20 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:53.373 { 00:21:53.373 "name": "7b5184af-ab97-4370-8c7f-f13a30b77a38", 00:21:53.373 "aliases": [ 00:21:53.373 "lvs/nvme0n1p0" 00:21:53.373 ], 00:21:53.373 "product_name": "Logical Volume", 00:21:53.373 "block_size": 4096, 00:21:53.373 "num_blocks": 26476544, 00:21:53.373 "uuid": "7b5184af-ab97-4370-8c7f-f13a30b77a38", 00:21:53.373 "assigned_rate_limits": { 00:21:53.373 "rw_ios_per_sec": 0, 00:21:53.373 "rw_mbytes_per_sec": 0, 00:21:53.373 "r_mbytes_per_sec": 0, 00:21:53.373 "w_mbytes_per_sec": 0 00:21:53.373 }, 00:21:53.373 "claimed": false, 00:21:53.373 "zoned": false, 00:21:53.373 "supported_io_types": { 00:21:53.373 "read": true, 00:21:53.373 "write": true, 00:21:53.373 "unmap": true, 00:21:53.373 "flush": false, 00:21:53.373 "reset": true, 00:21:53.373 "nvme_admin": false, 00:21:53.373 "nvme_io": false, 00:21:53.373 "nvme_io_md": false, 00:21:53.373 "write_zeroes": true, 00:21:53.373 "zcopy": false, 00:21:53.373 "get_zone_info": false, 00:21:53.373 "zone_management": false, 00:21:53.373 "zone_append": false, 00:21:53.373 "compare": false, 00:21:53.373 "compare_and_write": false, 00:21:53.373 "abort": false, 00:21:53.373 "seek_hole": true, 00:21:53.373 "seek_data": true, 00:21:53.373 "copy": false, 00:21:53.373 "nvme_iov_md": false 00:21:53.373 }, 00:21:53.373 "driver_specific": { 00:21:53.373 "lvol": { 00:21:53.373 "lvol_store_uuid": "afa4f4f1-fad5-4ca1-b9ec-6a51cb9c58e4", 00:21:53.373 "base_bdev": "nvme0n1", 00:21:53.373 "thin_provision": true, 00:21:53.373 "num_allocated_clusters": 0, 00:21:53.373 "snapshot": false, 00:21:53.373 "clone": false, 00:21:53.373 "esnap_clone": false 00:21:53.373 } 00:21:53.373 } 00:21:53.373 } 00:21:53.373 ]' 00:21:53.373 19:40:20 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:53.373 19:40:20 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:21:53.373 19:40:20 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:53.632 19:40:20 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:53.632 19:40:20 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:53.632 19:40:20 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:53.632 19:40:20 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:21:53.632 19:40:20 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:53.632 19:40:20 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:21:53.632 19:40:20 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:21:53.632 19:40:20 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 7b5184af-ab97-4370-8c7f-f13a30b77a38 00:21:53.632 19:40:20 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=7b5184af-ab97-4370-8c7f-f13a30b77a38 00:21:53.632 19:40:20 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:53.632 19:40:20 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:53.632 19:40:20 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:53.632 19:40:20 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7b5184af-ab97-4370-8c7f-f13a30b77a38 00:21:53.891 19:40:21 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:53.891 { 00:21:53.891 "name": "7b5184af-ab97-4370-8c7f-f13a30b77a38", 00:21:53.891 "aliases": [ 00:21:53.891 "lvs/nvme0n1p0" 00:21:53.891 ], 00:21:53.891 "product_name": "Logical Volume", 00:21:53.891 "block_size": 4096, 00:21:53.891 "num_blocks": 26476544, 00:21:53.891 "uuid": "7b5184af-ab97-4370-8c7f-f13a30b77a38", 00:21:53.891 "assigned_rate_limits": { 00:21:53.891 "rw_ios_per_sec": 0, 00:21:53.891 "rw_mbytes_per_sec": 0, 00:21:53.891 "r_mbytes_per_sec": 0, 00:21:53.891 "w_mbytes_per_sec": 0 00:21:53.891 }, 00:21:53.891 "claimed": false, 00:21:53.891 "zoned": false, 00:21:53.891 "supported_io_types": { 00:21:53.891 "read": true, 00:21:53.891 "write": true, 00:21:53.891 "unmap": true, 00:21:53.891 "flush": false, 00:21:53.891 "reset": true, 00:21:53.891 "nvme_admin": false, 00:21:53.891 "nvme_io": false, 00:21:53.891 "nvme_io_md": false, 00:21:53.891 "write_zeroes": true, 00:21:53.891 "zcopy": false, 00:21:53.891 "get_zone_info": false, 00:21:53.891 "zone_management": false, 00:21:53.891 "zone_append": false, 00:21:53.891 "compare": false, 00:21:53.891 "compare_and_write": false, 00:21:53.891 "abort": false, 00:21:53.891 "seek_hole": true, 00:21:53.891 "seek_data": true, 00:21:53.891 "copy": false, 00:21:53.891 "nvme_iov_md": false 00:21:53.891 }, 00:21:53.891 "driver_specific": { 00:21:53.891 "lvol": { 00:21:53.891 "lvol_store_uuid": "afa4f4f1-fad5-4ca1-b9ec-6a51cb9c58e4", 00:21:53.891 "base_bdev": "nvme0n1", 00:21:53.891 "thin_provision": true, 00:21:53.891 "num_allocated_clusters": 0, 00:21:53.891 "snapshot": false, 00:21:53.891 "clone": false, 00:21:53.891 "esnap_clone": false 00:21:53.891 } 00:21:53.891 } 00:21:53.891 } 00:21:53.891 ]' 00:21:53.891 19:40:21 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:53.891 19:40:21 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:53.891 19:40:21 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:53.891 19:40:21 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:21:53.891 19:40:21 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:53.891 19:40:21 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:53.892 19:40:21 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:21:53.892 19:40:21 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7b5184af-ab97-4370-8c7f-f13a30b77a38 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:21:54.156 [2024-12-05 19:40:21.303774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.156 [2024-12-05 19:40:21.303940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:54.156 [2024-12-05 19:40:21.303965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:54.156 [2024-12-05 19:40:21.303974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.156 [2024-12-05 19:40:21.307084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.156 [2024-12-05 19:40:21.307214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:54.156 [2024-12-05 19:40:21.307235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.083 ms 00:21:54.156 [2024-12-05 19:40:21.307245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.156 [2024-12-05 19:40:21.307353] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:54.156 [2024-12-05 19:40:21.308392] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:54.156 [2024-12-05 19:40:21.308440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.156 [2024-12-05 19:40:21.308452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:54.156 [2024-12-05 19:40:21.308463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.096 ms 00:21:54.156 [2024-12-05 19:40:21.308471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.156 [2024-12-05 19:40:21.308573] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2e1e31a5-869f-42dd-82c7-3d82fe790364 00:21:54.156 [2024-12-05 19:40:21.309640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.156 [2024-12-05 19:40:21.309682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:54.156 [2024-12-05 19:40:21.309692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:54.156 [2024-12-05 19:40:21.309702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.156 [2024-12-05 19:40:21.314913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.156 [2024-12-05 19:40:21.314941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:54.156 [2024-12-05 19:40:21.314952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.114 ms 00:21:54.156 [2024-12-05 19:40:21.314961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.156 [2024-12-05 19:40:21.315074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.157 [2024-12-05 19:40:21.315086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:54.157 [2024-12-05 19:40:21.315094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.070 ms 00:21:54.157 [2024-12-05 19:40:21.315106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.157 [2024-12-05 19:40:21.315141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.157 [2024-12-05 19:40:21.315151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:54.157 [2024-12-05 19:40:21.315158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:54.157 [2024-12-05 19:40:21.315169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.157 [2024-12-05 19:40:21.315203] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:54.157 [2024-12-05 19:40:21.318777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.157 [2024-12-05 19:40:21.318805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:54.157 [2024-12-05 19:40:21.318817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.577 ms 00:21:54.157 [2024-12-05 19:40:21.318825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.157 [2024-12-05 19:40:21.318894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.157 [2024-12-05 19:40:21.318917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:54.157 [2024-12-05 19:40:21.318927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:54.157 [2024-12-05 19:40:21.318934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.157 [2024-12-05 19:40:21.318963] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:54.157 [2024-12-05 19:40:21.319098] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:54.157 [2024-12-05 19:40:21.319113] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:54.157 [2024-12-05 19:40:21.319124] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:54.157 [2024-12-05 19:40:21.319135] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:54.157 [2024-12-05 19:40:21.319144] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:54.157 [2024-12-05 19:40:21.319153] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:54.157 [2024-12-05 19:40:21.319160] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:54.157 [2024-12-05 19:40:21.319170] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:54.157 [2024-12-05 19:40:21.319180] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:54.157 [2024-12-05 19:40:21.319188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.157 [2024-12-05 19:40:21.319196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:54.157 [2024-12-05 19:40:21.319205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:21:54.157 [2024-12-05 19:40:21.319212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.157 [2024-12-05 19:40:21.319322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.157 
[2024-12-05 19:40:21.319331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:54.157 [2024-12-05 19:40:21.319340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:54.157 [2024-12-05 19:40:21.319348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.157 [2024-12-05 19:40:21.319461] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:54.157 [2024-12-05 19:40:21.319470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:54.157 [2024-12-05 19:40:21.319479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:54.157 [2024-12-05 19:40:21.319487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.157 [2024-12-05 19:40:21.319496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:54.157 [2024-12-05 19:40:21.319503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:54.157 [2024-12-05 19:40:21.319511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:54.157 [2024-12-05 19:40:21.319518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:54.157 [2024-12-05 19:40:21.319527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:54.157 [2024-12-05 19:40:21.319533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:54.157 [2024-12-05 19:40:21.319542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:54.157 [2024-12-05 19:40:21.319549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:54.157 [2024-12-05 19:40:21.319558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:54.157 [2024-12-05 19:40:21.319565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:54.157 [2024-12-05 19:40:21.319580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:54.157 [2024-12-05 19:40:21.319586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.157 [2024-12-05 19:40:21.319596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:54.157 [2024-12-05 19:40:21.319603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:54.157 [2024-12-05 19:40:21.319610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.157 [2024-12-05 19:40:21.319617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:54.157 [2024-12-05 19:40:21.319626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:54.157 [2024-12-05 19:40:21.319632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:54.157 [2024-12-05 19:40:21.319640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:54.157 [2024-12-05 19:40:21.319647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:54.157 [2024-12-05 19:40:21.319655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:54.157 [2024-12-05 19:40:21.319662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:54.157 [2024-12-05 19:40:21.319688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:54.157 [2024-12-05 19:40:21.319695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:54.157 [2024-12-05 19:40:21.319704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:21:54.157 [2024-12-05 19:40:21.319710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:54.157 [2024-12-05 19:40:21.319718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:54.157 [2024-12-05 19:40:21.319728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:54.157 [2024-12-05 19:40:21.319738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:54.157 [2024-12-05 19:40:21.319745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:54.157 [2024-12-05 19:40:21.319753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:54.157 [2024-12-05 19:40:21.319760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:54.157 [2024-12-05 19:40:21.319768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:54.157 [2024-12-05 19:40:21.319775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:54.157 [2024-12-05 19:40:21.319784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:54.157 [2024-12-05 19:40:21.319791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.157 [2024-12-05 19:40:21.319800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:54.157 [2024-12-05 19:40:21.319806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:54.157 [2024-12-05 19:40:21.319814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.157 [2024-12-05 19:40:21.319821] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:54.157 [2024-12-05 19:40:21.319830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:54.157 [2024-12-05 19:40:21.319837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:54.157 [2024-12-05 19:40:21.319846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.157 [2024-12-05 19:40:21.319854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:54.157 [2024-12-05 19:40:21.319864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:54.157 [2024-12-05 19:40:21.319870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:54.157 [2024-12-05 19:40:21.319879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:54.157 [2024-12-05 19:40:21.319885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:54.157 [2024-12-05 19:40:21.319893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:54.157 [2024-12-05 19:40:21.319902] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:54.157 [2024-12-05 19:40:21.319912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:54.157 [2024-12-05 19:40:21.319922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:54.157 [2024-12-05 19:40:21.319931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:54.157 [2024-12-05 19:40:21.319938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:21:54.157 [2024-12-05 19:40:21.319947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:54.157 [2024-12-05 19:40:21.319954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:54.157 [2024-12-05 19:40:21.319962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:54.157 [2024-12-05 19:40:21.319970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:54.157 [2024-12-05 19:40:21.319978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:54.157 [2024-12-05 19:40:21.319986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:54.157 [2024-12-05 19:40:21.319997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:54.157 [2024-12-05 19:40:21.320004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:54.157 [2024-12-05 19:40:21.320013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:54.158 [2024-12-05 19:40:21.320020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:54.158 [2024-12-05 19:40:21.320028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:54.158 [2024-12-05 19:40:21.320036] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:54.158 [2024-12-05 19:40:21.320047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:54.158 [2024-12-05 19:40:21.320055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:54.158 [2024-12-05 19:40:21.320064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:54.158 [2024-12-05 19:40:21.320071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:54.158 [2024-12-05 19:40:21.320080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:54.158 [2024-12-05 19:40:21.320088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.158 [2024-12-05 19:40:21.320097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:54.158 [2024-12-05 19:40:21.320105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:21:54.158 [2024-12-05 19:40:21.320113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.158 [2024-12-05 19:40:21.320184] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:21:54.158 [2024-12-05 19:40:21.320196] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:58.454 [2024-12-05 19:40:25.588582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.454 [2024-12-05 19:40:25.588644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:58.454 [2024-12-05 19:40:25.588660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4268.382 ms 00:21:58.454 [2024-12-05 19:40:25.588678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.454 [2024-12-05 19:40:25.614112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.454 [2024-12-05 19:40:25.614290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:58.454 [2024-12-05 19:40:25.614308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.181 ms 00:21:58.454 [2024-12-05 19:40:25.614318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.454 [2024-12-05 19:40:25.614444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.454 [2024-12-05 19:40:25.614456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:58.454 [2024-12-05 19:40:25.614481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:58.454 [2024-12-05 19:40:25.614493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.454 [2024-12-05 19:40:25.655091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.454 [2024-12-05 19:40:25.655136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:58.454 [2024-12-05 19:40:25.655149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.567 ms 00:21:58.454 [2024-12-05 19:40:25.655159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.454 [2024-12-05 19:40:25.655255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.454 [2024-12-05 19:40:25.655269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:58.454 [2024-12-05 19:40:25.655278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:58.454 [2024-12-05 19:40:25.655287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.454 [2024-12-05 19:40:25.655613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.454 [2024-12-05 19:40:25.655634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:58.454 [2024-12-05 19:40:25.655642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:21:58.454 [2024-12-05 19:40:25.655651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.454 [2024-12-05 19:40:25.655791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.454 [2024-12-05 19:40:25.655803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:58.454 [2024-12-05 19:40:25.655824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:21:58.454 [2024-12-05 19:40:25.655835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.454 [2024-12-05 19:40:25.670308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.454 [2024-12-05 19:40:25.670341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:21:58.454 [2024-12-05 19:40:25.670352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.446 ms 00:21:58.454 [2024-12-05 19:40:25.670361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.454 [2024-12-05 19:40:25.682222] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:58.454 [2024-12-05 19:40:25.696883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.454 [2024-12-05 19:40:25.697033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:58.454 [2024-12-05 19:40:25.697053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.433 ms 00:21:58.454 [2024-12-05 19:40:25.697061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.716 [2024-12-05 19:40:25.787515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.716 [2024-12-05 19:40:25.787584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:58.716 [2024-12-05 19:40:25.787600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.385 ms 00:21:58.716 [2024-12-05 19:40:25.787609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.716 [2024-12-05 19:40:25.787885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.716 [2024-12-05 19:40:25.787902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:58.716 [2024-12-05 19:40:25.787914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:21:58.716 [2024-12-05 19:40:25.787922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.716 [2024-12-05 19:40:25.812938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.716 [2024-12-05 19:40:25.812975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:58.716 [2024-12-05 19:40:25.812989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.985 ms 00:21:58.716 [2024-12-05 19:40:25.812998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.716 [2024-12-05 19:40:25.837201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.716 [2024-12-05 19:40:25.837237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:58.716 [2024-12-05 19:40:25.837252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.139 ms 00:21:58.716 [2024-12-05 19:40:25.837261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.716 [2024-12-05 19:40:25.837872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.716 [2024-12-05 19:40:25.837889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:58.716 [2024-12-05 19:40:25.837900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:21:58.716 [2024-12-05 19:40:25.837907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.716 [2024-12-05 19:40:25.915413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.716 [2024-12-05 19:40:25.915586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:58.716 [2024-12-05 19:40:25.915609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.458 ms 00:21:58.716 [2024-12-05 19:40:25.915619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
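The L2P figures in this startup trace are internally consistent: the layout dump earlier reported 23592960 L2P entries at an address size of 4 bytes, which is 23592960 * 4 / 2^20 = 90 MiB, matching the 90.00 MiB l2p region, while bdev_ftl_create was invoked with --l2p_dram_limit 60, which is why the cache above reports a maximum resident size of 59 (of 60) MiB. A small sketch of that arithmetic, using illustrative variable names with values taken from this log:

# Back-of-the-envelope check of the L2P sizing reported above
# (illustrative names; the numbers come from this startup trace).
l2p_entries=23592960      # one entry per addressable 4 KiB user block
addr_size=4               # "L2P address size: 4" (bytes per entry)
dram_limit_mib=60         # --l2p_dram_limit passed to bdev_ftl_create

full_l2p_mib=$(( l2p_entries * addr_size / 1024 / 1024 ))
echo "full L2P table: ${full_l2p_mib} MiB"    # -> 90, the l2p region size
echo "resident L2P cache: capped at ${dram_limit_mib} MiB (log: 59 of 60)"

The same 23592960 figure reappears below as num_blocks of the resulting ftl0 bdev, i.e. the L2P holds one mapping entry per exposed block.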
00:21:58.716 [2024-12-05 19:40:25.941353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.716 [2024-12-05 19:40:25.941392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:58.716 [2024-12-05 19:40:25.941405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.640 ms 00:21:58.716 [2024-12-05 19:40:25.941413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.716 [2024-12-05 19:40:25.966303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.716 [2024-12-05 19:40:25.966339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:58.716 [2024-12-05 19:40:25.966352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.828 ms 00:21:58.716 [2024-12-05 19:40:25.966361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.978 [2024-12-05 19:40:25.992480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.978 [2024-12-05 19:40:25.992527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:58.978 [2024-12-05 19:40:25.992540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.039 ms 00:21:58.978 [2024-12-05 19:40:25.992547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.978 [2024-12-05 19:40:25.992611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.978 [2024-12-05 19:40:25.992625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:58.978 [2024-12-05 19:40:25.992638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:58.978 [2024-12-05 19:40:25.992645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.978 [2024-12-05 19:40:25.992736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.978 [2024-12-05 19:40:25.992746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:58.978 [2024-12-05 19:40:25.992756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:58.978 [2024-12-05 19:40:25.992763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.978 [2024-12-05 19:40:25.993531] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:58.978 [2024-12-05 19:40:25.996596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4689.475 ms, result 0 00:21:58.978 [2024-12-05 19:40:25.998060] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:58.978 { 00:21:58.978 "name": "ftl0", 00:21:58.978 "uuid": "2e1e31a5-869f-42dd-82c7-3d82fe790364" 00:21:58.978 } 00:21:58.978 19:40:26 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:21:58.978 19:40:26 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:21:58.978 19:40:26 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:58.978 19:40:26 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:21:58.978 19:40:26 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:58.978 19:40:26 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:58.978 19:40:26 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:58.978 19:40:26 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:59.240 [ 00:21:59.240 { 00:21:59.240 "name": "ftl0", 00:21:59.240 "aliases": [ 00:21:59.240 "2e1e31a5-869f-42dd-82c7-3d82fe790364" 00:21:59.240 ], 00:21:59.240 "product_name": "FTL disk", 00:21:59.240 "block_size": 4096, 00:21:59.240 "num_blocks": 23592960, 00:21:59.240 "uuid": "2e1e31a5-869f-42dd-82c7-3d82fe790364", 00:21:59.240 "assigned_rate_limits": { 00:21:59.240 "rw_ios_per_sec": 0, 00:21:59.240 "rw_mbytes_per_sec": 0, 00:21:59.240 "r_mbytes_per_sec": 0, 00:21:59.240 "w_mbytes_per_sec": 0 00:21:59.240 }, 00:21:59.240 "claimed": false, 00:21:59.240 "zoned": false, 00:21:59.240 "supported_io_types": { 00:21:59.240 "read": true, 00:21:59.240 "write": true, 00:21:59.240 "unmap": true, 00:21:59.240 "flush": true, 00:21:59.240 "reset": false, 00:21:59.240 "nvme_admin": false, 00:21:59.240 "nvme_io": false, 00:21:59.240 "nvme_io_md": false, 00:21:59.240 "write_zeroes": true, 00:21:59.240 "zcopy": false, 00:21:59.240 "get_zone_info": false, 00:21:59.240 "zone_management": false, 00:21:59.240 "zone_append": false, 00:21:59.240 "compare": false, 00:21:59.240 "compare_and_write": false, 00:21:59.240 "abort": false, 00:21:59.240 "seek_hole": false, 00:21:59.240 "seek_data": false, 00:21:59.240 "copy": false, 00:21:59.240 "nvme_iov_md": false 00:21:59.240 }, 00:21:59.240 "driver_specific": { 00:21:59.240 "ftl": { 00:21:59.240 "base_bdev": "7b5184af-ab97-4370-8c7f-f13a30b77a38", 00:21:59.240 "cache": "nvc0n1p0" 00:21:59.240 } 00:21:59.240 } 00:21:59.240 } 00:21:59.240 ] 00:21:59.240 19:40:26 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:21:59.240 19:40:26 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:21:59.240 19:40:26 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:59.503 19:40:26 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:21:59.503 19:40:26 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:21:59.765 19:40:26 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:21:59.765 { 00:21:59.765 "name": "ftl0", 00:21:59.765 "aliases": [ 00:21:59.765 "2e1e31a5-869f-42dd-82c7-3d82fe790364" 00:21:59.765 ], 00:21:59.765 "product_name": "FTL disk", 00:21:59.765 "block_size": 4096, 00:21:59.765 "num_blocks": 23592960, 00:21:59.765 "uuid": "2e1e31a5-869f-42dd-82c7-3d82fe790364", 00:21:59.765 "assigned_rate_limits": { 00:21:59.765 "rw_ios_per_sec": 0, 00:21:59.765 "rw_mbytes_per_sec": 0, 00:21:59.765 "r_mbytes_per_sec": 0, 00:21:59.765 "w_mbytes_per_sec": 0 00:21:59.765 }, 00:21:59.765 "claimed": false, 00:21:59.765 "zoned": false, 00:21:59.765 "supported_io_types": { 00:21:59.765 "read": true, 00:21:59.765 "write": true, 00:21:59.765 "unmap": true, 00:21:59.765 "flush": true, 00:21:59.765 "reset": false, 00:21:59.765 "nvme_admin": false, 00:21:59.765 "nvme_io": false, 00:21:59.765 "nvme_io_md": false, 00:21:59.765 "write_zeroes": true, 00:21:59.765 "zcopy": false, 00:21:59.765 "get_zone_info": false, 00:21:59.765 "zone_management": false, 00:21:59.765 "zone_append": false, 00:21:59.765 "compare": false, 00:21:59.765 "compare_and_write": false, 00:21:59.765 "abort": false, 00:21:59.765 "seek_hole": false, 00:21:59.765 "seek_data": false, 00:21:59.765 "copy": false, 00:21:59.765 "nvme_iov_md": false 00:21:59.765 }, 00:21:59.765 "driver_specific": { 00:21:59.765 "ftl": { 00:21:59.765 "base_bdev": "7b5184af-ab97-4370-8c7f-f13a30b77a38", 
00:21:59.765 "cache": "nvc0n1p0" 00:21:59.765 } 00:21:59.765 } 00:21:59.765 } 00:21:59.765 ]' 00:21:59.765 19:40:26 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:21:59.765 19:40:26 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:21:59.765 19:40:26 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:00.028 [2024-12-05 19:40:27.041488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.028 [2024-12-05 19:40:27.041537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:00.028 [2024-12-05 19:40:27.041552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:00.028 [2024-12-05 19:40:27.041564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.028 [2024-12-05 19:40:27.041595] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:00.028 [2024-12-05 19:40:27.044245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.028 [2024-12-05 19:40:27.044367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:00.028 [2024-12-05 19:40:27.044392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.632 ms 00:22:00.028 [2024-12-05 19:40:27.044399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.028 [2024-12-05 19:40:27.044879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.028 [2024-12-05 19:40:27.044889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:00.028 [2024-12-05 19:40:27.044900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:22:00.028 [2024-12-05 19:40:27.044907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.028 [2024-12-05 19:40:27.048546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.028 [2024-12-05 19:40:27.048567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:00.028 [2024-12-05 19:40:27.048578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.612 ms 00:22:00.028 [2024-12-05 19:40:27.048587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.028 [2024-12-05 19:40:27.055676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.028 [2024-12-05 19:40:27.055704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:00.028 [2024-12-05 19:40:27.055716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.023 ms 00:22:00.028 [2024-12-05 19:40:27.055723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.028 [2024-12-05 19:40:27.080539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.028 [2024-12-05 19:40:27.080571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:00.028 [2024-12-05 19:40:27.080587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.744 ms 00:22:00.028 [2024-12-05 19:40:27.080595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.028 [2024-12-05 19:40:27.096113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.028 [2024-12-05 19:40:27.096145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:00.028 [2024-12-05 19:40:27.096159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 15.460 ms 00:22:00.028 [2024-12-05 19:40:27.096170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.028 [2024-12-05 19:40:27.096375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.028 [2024-12-05 19:40:27.096385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:00.028 [2024-12-05 19:40:27.096395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:22:00.028 [2024-12-05 19:40:27.096402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.028 [2024-12-05 19:40:27.120386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.028 [2024-12-05 19:40:27.120416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:00.028 [2024-12-05 19:40:27.120428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.956 ms 00:22:00.028 [2024-12-05 19:40:27.120435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.028 [2024-12-05 19:40:27.143983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.028 [2024-12-05 19:40:27.144013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:00.028 [2024-12-05 19:40:27.144028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.497 ms 00:22:00.028 [2024-12-05 19:40:27.144036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.028 [2024-12-05 19:40:27.167142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.028 [2024-12-05 19:40:27.167260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:00.028 [2024-12-05 19:40:27.167278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.051 ms 00:22:00.028 [2024-12-05 19:40:27.167285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.028 [2024-12-05 19:40:27.190666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.028 [2024-12-05 19:40:27.190703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:00.028 [2024-12-05 19:40:27.190717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.289 ms 00:22:00.028 [2024-12-05 19:40:27.190724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.028 [2024-12-05 19:40:27.190780] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:00.028 [2024-12-05 19:40:27.190794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:00.028 [2024-12-05 19:40:27.190806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:00.028 [2024-12-05 19:40:27.190814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:00.028 [2024-12-05 19:40:27.190824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:00.028 [2024-12-05 19:40:27.190831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:00.028 [2024-12-05 19:40:27.190843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:00.028 [2024-12-05 19:40:27.190851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:00.028 [2024-12-05 19:40:27.190860] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
[... ftl_dev_dump_bands entries for Band 9 through Band 99 elided: all 100 bands report the identical "0 / 261120 wr_cnt: 0 state: free" ...]
[2024-12-05 19:40:27.191644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
[2024-12-05 19:40:27.191660] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-12-05 19:40:27.191687] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e1e31a5-869f-42dd-82c7-3d82fe790364
[2024-12-05 19:40:27.191696] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-12-05 19:40:27.191836] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-12-05 19:40:27.191844] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-12-05 19:40:27.191855] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-12-05 19:40:27.191862] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
[2024-12-05 19:40:27.191871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:22:00.029 [2024-12-05 19:40:27.191878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:00.029 [2024-12-05 19:40:27.191886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:00.029 [2024-12-05 19:40:27.191892] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:00.029 [2024-12-05 19:40:27.191901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.029 [2024-12-05 19:40:27.191908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:00.029 [2024-12-05 19:40:27.191918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 00:22:00.029 [2024-12-05 19:40:27.191925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.029 [2024-12-05 19:40:27.204486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.029 [2024-12-05 19:40:27.204517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:00.029 [2024-12-05 19:40:27.204532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.520 ms 00:22:00.029 [2024-12-05 19:40:27.204540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.029 [2024-12-05 19:40:27.204945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.029 [2024-12-05 19:40:27.204958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:00.029 [2024-12-05 19:40:27.204967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:22:00.029 [2024-12-05 19:40:27.204975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.029 [2024-12-05 19:40:27.248191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.029 [2024-12-05 19:40:27.248225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:00.029 [2024-12-05 19:40:27.248238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.029 [2024-12-05 19:40:27.248246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.029 [2024-12-05 19:40:27.248337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.029 [2024-12-05 19:40:27.248347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:00.029 [2024-12-05 19:40:27.248359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.029 [2024-12-05 19:40:27.248367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.030 [2024-12-05 19:40:27.248424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.030 [2024-12-05 19:40:27.248435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:00.030 [2024-12-05 19:40:27.248449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.030 [2024-12-05 19:40:27.248457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.030 [2024-12-05 19:40:27.248486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.030 [2024-12-05 19:40:27.248495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:00.030 [2024-12-05 19:40:27.248505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.030 [2024-12-05 19:40:27.248513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.292 [2024-12-05 19:40:27.328570] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.292 [2024-12-05 19:40:27.328614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:00.292 [2024-12-05 19:40:27.328626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.292 [2024-12-05 19:40:27.328634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.292 [2024-12-05 19:40:27.391787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.292 [2024-12-05 19:40:27.391826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:00.292 [2024-12-05 19:40:27.391838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.292 [2024-12-05 19:40:27.391846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.292 [2024-12-05 19:40:27.391928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.292 [2024-12-05 19:40:27.391938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:00.292 [2024-12-05 19:40:27.391950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.292 [2024-12-05 19:40:27.391959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.292 [2024-12-05 19:40:27.391999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.292 [2024-12-05 19:40:27.392008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:00.292 [2024-12-05 19:40:27.392017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.292 [2024-12-05 19:40:27.392024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.292 [2024-12-05 19:40:27.392127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.292 [2024-12-05 19:40:27.392136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:00.292 [2024-12-05 19:40:27.392145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.292 [2024-12-05 19:40:27.392154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.292 [2024-12-05 19:40:27.392204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.292 [2024-12-05 19:40:27.392213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:00.292 [2024-12-05 19:40:27.392222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.292 [2024-12-05 19:40:27.392228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.292 [2024-12-05 19:40:27.392272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.292 [2024-12-05 19:40:27.392281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:00.292 [2024-12-05 19:40:27.392292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.292 [2024-12-05 19:40:27.392299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.292 [2024-12-05 19:40:27.392360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.292 [2024-12-05 19:40:27.392370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:00.292 [2024-12-05 19:40:27.392379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.292 [2024-12-05 19:40:27.392386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:00.292 [2024-12-05 19:40:27.392545] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 351.041 ms, result 0 00:22:00.292 true 00:22:00.292 19:40:27 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76682 00:22:00.292 19:40:27 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76682 ']' 00:22:00.292 19:40:27 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76682 00:22:00.292 19:40:27 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:22:00.292 19:40:27 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:00.292 19:40:27 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76682 00:22:00.292 killing process with pid 76682 00:22:00.292 19:40:27 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:00.292 19:40:27 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:00.292 19:40:27 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76682' 00:22:00.292 19:40:27 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76682 00:22:00.292 19:40:27 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76682 00:22:04.496 19:40:31 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:22:05.438 65536+0 records in 00:22:05.438 65536+0 records out 00:22:05.438 268435456 bytes (268 MB, 256 MiB) copied, 1.07623 s, 249 MB/s 00:22:05.438 19:40:32 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:05.698 [2024-12-05 19:40:32.731792] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
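Before the second startup pass, the trim test seeds a random pattern: the dd step above wrote 65536 records of bs=4K, i.e. 65536 * 4096 = 268435456 bytes (256 MiB), and the reported 249 MB/s is simply that byte count over the 1.07623 s elapsed time, in decimal megabytes. A standalone recheck of the arithmetic (plain shell, nothing SPDK-specific):

  bytes=$((65536 * 4096))   # 268435456 bytes = 256 MiB, matching dd's "records out"
  awk -v b="$bytes" 'BEGIN { printf "%.0f MB/s\n", b / 1.07623 / 1e6 }'   # ~249 MB/s, as dd reported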
00:22:05.698 [2024-12-05 19:40:32.731909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76881 ] 00:22:05.698 [2024-12-05 19:40:32.891744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.959 [2024-12-05 19:40:32.993913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.221 [2024-12-05 19:40:33.252168] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:06.221 [2024-12-05 19:40:33.252237] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:06.221 [2024-12-05 19:40:33.406664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.221 [2024-12-05 19:40:33.406722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:06.221 [2024-12-05 19:40:33.406735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:06.221 [2024-12-05 19:40:33.406743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.221 [2024-12-05 19:40:33.409347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.221 [2024-12-05 19:40:33.409384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:06.221 [2024-12-05 19:40:33.409394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.585 ms 00:22:06.221 [2024-12-05 19:40:33.409401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.221 [2024-12-05 19:40:33.409524] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:06.221 [2024-12-05 19:40:33.410289] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:06.221 [2024-12-05 19:40:33.410322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.221 [2024-12-05 19:40:33.410330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:06.221 [2024-12-05 19:40:33.410339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:22:06.221 [2024-12-05 19:40:33.410346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.221 [2024-12-05 19:40:33.411452] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:06.221 [2024-12-05 19:40:33.424297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.221 [2024-12-05 19:40:33.424329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:06.221 [2024-12-05 19:40:33.424340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.847 ms 00:22:06.221 [2024-12-05 19:40:33.424348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.221 [2024-12-05 19:40:33.424432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.221 [2024-12-05 19:40:33.424444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:06.221 [2024-12-05 19:40:33.424452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:06.221 [2024-12-05 19:40:33.424459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.221 [2024-12-05 19:40:33.429282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:06.221 [2024-12-05 19:40:33.429310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:06.221 [2024-12-05 19:40:33.429320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.783 ms 00:22:06.221 [2024-12-05 19:40:33.429327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.221 [2024-12-05 19:40:33.429409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.221 [2024-12-05 19:40:33.429418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:06.221 [2024-12-05 19:40:33.429427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:06.221 [2024-12-05 19:40:33.429435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.221 [2024-12-05 19:40:33.429460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.221 [2024-12-05 19:40:33.429468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:06.221 [2024-12-05 19:40:33.429475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:06.221 [2024-12-05 19:40:33.429482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.221 [2024-12-05 19:40:33.429501] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:06.221 [2024-12-05 19:40:33.432885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.221 [2024-12-05 19:40:33.432910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:06.221 [2024-12-05 19:40:33.432919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.388 ms 00:22:06.221 [2024-12-05 19:40:33.432926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.221 [2024-12-05 19:40:33.432961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.221 [2024-12-05 19:40:33.432969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:06.221 [2024-12-05 19:40:33.432977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:06.221 [2024-12-05 19:40:33.432984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.222 [2024-12-05 19:40:33.433003] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:06.222 [2024-12-05 19:40:33.433021] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:06.222 [2024-12-05 19:40:33.433054] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:06.222 [2024-12-05 19:40:33.433069] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:06.222 [2024-12-05 19:40:33.433170] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:06.222 [2024-12-05 19:40:33.433180] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:06.222 [2024-12-05 19:40:33.433190] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:06.222 [2024-12-05 19:40:33.433202] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:06.222 [2024-12-05 19:40:33.433210] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:06.222 [2024-12-05 19:40:33.433219] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:06.222 [2024-12-05 19:40:33.433225] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:06.222 [2024-12-05 19:40:33.433233] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:06.222 [2024-12-05 19:40:33.433239] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:06.222 [2024-12-05 19:40:33.433247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.222 [2024-12-05 19:40:33.433254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:06.222 [2024-12-05 19:40:33.433261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:22:06.222 [2024-12-05 19:40:33.433268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.222 [2024-12-05 19:40:33.433355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.222 [2024-12-05 19:40:33.433365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:06.222 [2024-12-05 19:40:33.433372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:06.222 [2024-12-05 19:40:33.433379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.222 [2024-12-05 19:40:33.433491] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:06.222 [2024-12-05 19:40:33.433501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:06.222 [2024-12-05 19:40:33.433509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:06.222 [2024-12-05 19:40:33.433516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.222 [2024-12-05 19:40:33.433523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:06.222 [2024-12-05 19:40:33.433530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:06.222 [2024-12-05 19:40:33.433537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:06.222 [2024-12-05 19:40:33.433544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:06.222 [2024-12-05 19:40:33.433551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:06.222 [2024-12-05 19:40:33.433558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:06.222 [2024-12-05 19:40:33.433565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:06.222 [2024-12-05 19:40:33.433577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:06.222 [2024-12-05 19:40:33.433584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:06.222 [2024-12-05 19:40:33.433591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:06.222 [2024-12-05 19:40:33.433597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:06.222 [2024-12-05 19:40:33.433604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.222 [2024-12-05 19:40:33.433610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:06.222 [2024-12-05 19:40:33.433617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:06.222 [2024-12-05 19:40:33.433623] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.222 [2024-12-05 19:40:33.433630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:06.222 [2024-12-05 19:40:33.433636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:06.222 [2024-12-05 19:40:33.433643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.222 [2024-12-05 19:40:33.433649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:06.222 [2024-12-05 19:40:33.433655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:06.222 [2024-12-05 19:40:33.433662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.222 [2024-12-05 19:40:33.433685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:06.222 [2024-12-05 19:40:33.433693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:06.222 [2024-12-05 19:40:33.433699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.222 [2024-12-05 19:40:33.433706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:06.222 [2024-12-05 19:40:33.433712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:06.222 [2024-12-05 19:40:33.433718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.222 [2024-12-05 19:40:33.433725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:06.222 [2024-12-05 19:40:33.433731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:06.222 [2024-12-05 19:40:33.433738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:06.222 [2024-12-05 19:40:33.433744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:06.222 [2024-12-05 19:40:33.433750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:06.222 [2024-12-05 19:40:33.433756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:06.222 [2024-12-05 19:40:33.433763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:06.222 [2024-12-05 19:40:33.433769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:06.222 [2024-12-05 19:40:33.433775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.222 [2024-12-05 19:40:33.433782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:06.222 [2024-12-05 19:40:33.433788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:06.222 [2024-12-05 19:40:33.433796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.222 [2024-12-05 19:40:33.433802] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:06.222 [2024-12-05 19:40:33.433810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:06.222 [2024-12-05 19:40:33.433819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:06.222 [2024-12-05 19:40:33.433826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.222 [2024-12-05 19:40:33.433833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:06.222 [2024-12-05 19:40:33.433840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:06.222 [2024-12-05 19:40:33.433847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:06.222 
[2024-12-05 19:40:33.433854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:06.222 [2024-12-05 19:40:33.433860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:06.222 [2024-12-05 19:40:33.433866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:06.222 [2024-12-05 19:40:33.433874] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:06.222 [2024-12-05 19:40:33.433882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:06.222 [2024-12-05 19:40:33.433891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:06.222 [2024-12-05 19:40:33.433898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:06.222 [2024-12-05 19:40:33.433905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:06.222 [2024-12-05 19:40:33.433911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:06.222 [2024-12-05 19:40:33.433918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:06.222 [2024-12-05 19:40:33.433925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:06.222 [2024-12-05 19:40:33.433932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:06.222 [2024-12-05 19:40:33.433939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:06.222 [2024-12-05 19:40:33.433946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:06.222 [2024-12-05 19:40:33.433953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:06.222 [2024-12-05 19:40:33.433960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:06.222 [2024-12-05 19:40:33.433967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:06.222 [2024-12-05 19:40:33.433973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:06.222 [2024-12-05 19:40:33.433980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:06.222 [2024-12-05 19:40:33.433987] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:06.222 [2024-12-05 19:40:33.433995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:06.222 [2024-12-05 19:40:33.434004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:06.222 [2024-12-05 19:40:33.434011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:06.222 [2024-12-05 19:40:33.434019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:06.222 [2024-12-05 19:40:33.434027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:06.222 [2024-12-05 19:40:33.434034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.223 [2024-12-05 19:40:33.434044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:06.223 [2024-12-05 19:40:33.434051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:22:06.223 [2024-12-05 19:40:33.434058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.223 [2024-12-05 19:40:33.459525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.223 [2024-12-05 19:40:33.459559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:06.223 [2024-12-05 19:40:33.459569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.415 ms 00:22:06.223 [2024-12-05 19:40:33.459577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.223 [2024-12-05 19:40:33.459711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.223 [2024-12-05 19:40:33.459721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:06.223 [2024-12-05 19:40:33.459730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:22:06.223 [2024-12-05 19:40:33.459737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.482 [2024-12-05 19:40:33.506097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.482 [2024-12-05 19:40:33.506136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:06.482 [2024-12-05 19:40:33.506151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.340 ms 00:22:06.482 [2024-12-05 19:40:33.506159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.482 [2024-12-05 19:40:33.506248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.482 [2024-12-05 19:40:33.506259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:06.482 [2024-12-05 19:40:33.506268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:06.482 [2024-12-05 19:40:33.506275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.482 [2024-12-05 19:40:33.506591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.482 [2024-12-05 19:40:33.506605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:06.482 [2024-12-05 19:40:33.506620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:22:06.482 [2024-12-05 19:40:33.506627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.482 [2024-12-05 19:40:33.506771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.482 [2024-12-05 19:40:33.506781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:06.482 [2024-12-05 19:40:33.506790] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:22:06.482 [2024-12-05 19:40:33.506798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.482 [2024-12-05 19:40:33.519988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.482 [2024-12-05 19:40:33.520126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:06.482 [2024-12-05 19:40:33.520143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.170 ms 00:22:06.482 [2024-12-05 19:40:33.520151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.482 [2024-12-05 19:40:33.532868] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:06.482 [2024-12-05 19:40:33.532903] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:06.482 [2024-12-05 19:40:33.532914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.482 [2024-12-05 19:40:33.532922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:06.482 [2024-12-05 19:40:33.532931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.669 ms 00:22:06.482 [2024-12-05 19:40:33.532938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.482 [2024-12-05 19:40:33.557099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.482 [2024-12-05 19:40:33.557136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:06.482 [2024-12-05 19:40:33.557148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.092 ms 00:22:06.482 [2024-12-05 19:40:33.557156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.482 [2024-12-05 19:40:33.568867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.483 [2024-12-05 19:40:33.568897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:06.483 [2024-12-05 19:40:33.568907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.644 ms 00:22:06.483 [2024-12-05 19:40:33.568914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.483 [2024-12-05 19:40:33.580731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.483 [2024-12-05 19:40:33.580760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:06.483 [2024-12-05 19:40:33.580771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.757 ms 00:22:06.483 [2024-12-05 19:40:33.580779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.483 [2024-12-05 19:40:33.581393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.483 [2024-12-05 19:40:33.581418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:06.483 [2024-12-05 19:40:33.581427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:22:06.483 [2024-12-05 19:40:33.581435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.483 [2024-12-05 19:40:33.636117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.483 [2024-12-05 19:40:33.636169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:06.483 [2024-12-05 19:40:33.636183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.658 ms 00:22:06.483 [2024-12-05 19:40:33.636191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.483 [2024-12-05 19:40:33.646457] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:06.483 [2024-12-05 19:40:33.660363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.483 [2024-12-05 19:40:33.660401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:06.483 [2024-12-05 19:40:33.660413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.078 ms 00:22:06.483 [2024-12-05 19:40:33.660421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.483 [2024-12-05 19:40:33.660506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.483 [2024-12-05 19:40:33.660517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:06.483 [2024-12-05 19:40:33.660526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:06.483 [2024-12-05 19:40:33.660533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.483 [2024-12-05 19:40:33.660576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.483 [2024-12-05 19:40:33.660585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:06.483 [2024-12-05 19:40:33.660593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:22:06.483 [2024-12-05 19:40:33.660600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.483 [2024-12-05 19:40:33.660632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.483 [2024-12-05 19:40:33.660643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:06.483 [2024-12-05 19:40:33.660651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:06.483 [2024-12-05 19:40:33.660658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.483 [2024-12-05 19:40:33.660714] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:06.483 [2024-12-05 19:40:33.660725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.483 [2024-12-05 19:40:33.660733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:06.483 [2024-12-05 19:40:33.660741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:06.483 [2024-12-05 19:40:33.660748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.483 [2024-12-05 19:40:33.684854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.483 [2024-12-05 19:40:33.684890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:06.483 [2024-12-05 19:40:33.684902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.083 ms 00:22:06.483 [2024-12-05 19:40:33.684911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.483 [2024-12-05 19:40:33.684996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.483 [2024-12-05 19:40:33.685006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:06.483 [2024-12-05 19:40:33.685015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:06.483 [2024-12-05 19:40:33.685022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
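The Action/name/duration/status quadruples above are the per-step trace of the 'FTL startup' management process. As a hedged sketch only — the command that kicked this off is outside this excerpt — a startup trace of this shape is typically triggered by creating the FTL bdev over RPC with bdev_ftl_create:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # -b: name of the FTL bdev (matches the [FTL][ftl0] tag throughout this log)
  # -d: base (bulk) bdev -- "nvme0n1" is an assumed placeholder, not named in this excerpt
  # -c: NV cache bdev -- nvc0n1p0 is the write-buffer cache this log names further down
  "$RPC" bdev_ftl_create -b ftl0 -d nvme0n1 -c nvc0n1p0

The RPC returns once the 'FTL startup' management process completes; the finish_msg line just below reports that here at 279.348 ms with result 0.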
00:22:06.483 [2024-12-05 19:40:33.686293] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:06.483 [2024-12-05 19:40:33.689397] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 279.348 ms, result 0 00:22:06.483 [2024-12-05 19:40:33.690622] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:06.483 [2024-12-05 19:40:33.703769] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:07.865  [2024-12-05T19:40:36.060Z] Copying: 15/256 [MB] (15 MBps) [2024-12-05T19:40:37.004Z] Copying: 29/256 [MB] (13 MBps) [2024-12-05T19:40:37.948Z] Copying: 48/256 [MB] (19 MBps) [2024-12-05T19:40:38.893Z] Copying: 59256/262144 [kB] (9252 kBps) [2024-12-05T19:40:39.835Z] Copying: 75/256 [MB] (17 MBps) [2024-12-05T19:40:40.779Z] Copying: 88/256 [MB] (12 MBps) [2024-12-05T19:40:41.729Z] Copying: 106/256 [MB] (18 MBps) [2024-12-05T19:40:43.128Z] Copying: 125/256 [MB] (18 MBps) [2024-12-05T19:40:44.073Z] Copying: 142/256 [MB] (17 MBps) [2024-12-05T19:40:45.018Z] Copying: 153/256 [MB] (11 MBps) [2024-12-05T19:40:45.968Z] Copying: 168/256 [MB] (15 MBps) [2024-12-05T19:40:46.911Z] Copying: 180/256 [MB] (11 MBps) [2024-12-05T19:40:47.853Z] Copying: 195052/262144 [kB] (9900 kBps) [2024-12-05T19:40:48.796Z] Copying: 209/256 [MB] (18 MBps) [2024-12-05T19:40:49.739Z] Copying: 221/256 [MB] (11 MBps) [2024-12-05T19:40:51.179Z] Copying: 235/256 [MB] (14 MBps) [2024-12-05T19:40:51.179Z] Copying: 255/256 [MB] (19 MBps) [2024-12-05T19:40:51.179Z] Copying: 256/256 [MB] (average 15 MBps)[2024-12-05 19:40:50.736964] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:23.924 [2024-12-05 19:40:50.746272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.924 [2024-12-05 19:40:50.746309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:23.924 [2024-12-05 19:40:50.746322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:23.924 [2024-12-05 19:40:50.746334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.924 [2024-12-05 19:40:50.746355] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:23.924 [2024-12-05 19:40:50.748989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.924 [2024-12-05 19:40:50.749021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:23.924 [2024-12-05 19:40:50.749032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.621 ms 00:22:23.924 [2024-12-05 19:40:50.749040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.924 [2024-12-05 19:40:50.750663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.924 [2024-12-05 19:40:50.750714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:23.924 [2024-12-05 19:40:50.750730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.598 ms 00:22:23.924 [2024-12-05 19:40:50.750741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.924 [2024-12-05 19:40:50.758057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.924 [2024-12-05 19:40:50.758103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist L2P 00:22:23.924 [2024-12-05 19:40:50.758112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.292 ms 00:22:23.924 [2024-12-05 19:40:50.758120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.924 [2024-12-05 19:40:50.765078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.924 [2024-12-05 19:40:50.765106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:23.924 [2024-12-05 19:40:50.765114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.916 ms 00:22:23.924 [2024-12-05 19:40:50.765122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.924 [2024-12-05 19:40:50.788713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.924 [2024-12-05 19:40:50.788749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:23.924 [2024-12-05 19:40:50.788761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.552 ms 00:22:23.924 [2024-12-05 19:40:50.788768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.924 [2024-12-05 19:40:50.802937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.924 [2024-12-05 19:40:50.802975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:23.924 [2024-12-05 19:40:50.802988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.126 ms 00:22:23.924 [2024-12-05 19:40:50.802996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.924 [2024-12-05 19:40:50.803128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.924 [2024-12-05 19:40:50.803138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:23.924 [2024-12-05 19:40:50.803147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:22:23.924 [2024-12-05 19:40:50.803162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.924 [2024-12-05 19:40:50.826554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.924 [2024-12-05 19:40:50.826586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:23.924 [2024-12-05 19:40:50.826596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.377 ms 00:22:23.924 [2024-12-05 19:40:50.826603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.924 [2024-12-05 19:40:50.850693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.924 [2024-12-05 19:40:50.850724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:23.924 [2024-12-05 19:40:50.850735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.055 ms 00:22:23.924 [2024-12-05 19:40:50.850743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.924 [2024-12-05 19:40:50.873839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.924 [2024-12-05 19:40:50.873869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:23.924 [2024-12-05 19:40:50.873879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.046 ms 00:22:23.924 [2024-12-05 19:40:50.873887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.924 [2024-12-05 19:40:50.896594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.924 
[2024-12-05 19:40:50.896744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:23.924 [2024-12-05 19:40:50.896761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.649 ms 00:22:23.924 [2024-12-05 19:40:50.896768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.924 [2024-12-05 19:40:50.896806] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:23.924 [2024-12-05 19:40:50.896821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 
0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.896993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.897000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.897007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.897015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.897024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.897031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.897038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.897048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.897055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.897063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.897070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:23.924 [2024-12-05 19:40:50.897077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897345] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 
19:40:50.897533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:23.925 [2024-12-05 19:40:50.897571] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:23.925 [2024-12-05 19:40:50.897578] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e1e31a5-869f-42dd-82c7-3d82fe790364 00:22:23.925 [2024-12-05 19:40:50.897586] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:23.925 [2024-12-05 19:40:50.897593] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:23.925 [2024-12-05 19:40:50.897600] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:23.925 [2024-12-05 19:40:50.897608] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:23.925 [2024-12-05 19:40:50.897614] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:23.925 [2024-12-05 19:40:50.897622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:23.925 [2024-12-05 19:40:50.897629] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:23.925 [2024-12-05 19:40:50.897635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:23.925 [2024-12-05 19:40:50.897641] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:23.925 [2024-12-05 19:40:50.897648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.926 [2024-12-05 19:40:50.897658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:23.926 [2024-12-05 19:40:50.897666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.842 ms 00:22:23.926 [2024-12-05 19:40:50.897683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.926 [2024-12-05 19:40:50.910294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.926 [2024-12-05 19:40:50.910325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:23.926 [2024-12-05 19:40:50.910336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.582 ms 00:22:23.926 [2024-12-05 19:40:50.910343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.926 [2024-12-05 19:40:50.910727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.926 [2024-12-05 19:40:50.910743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:23.926 [2024-12-05 19:40:50.910757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:22:23.926 [2024-12-05 19:40:50.910768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.926 [2024-12-05 19:40:50.945850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.926 [2024-12-05 19:40:50.945882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:23.926 [2024-12-05 19:40:50.945893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.926 [2024-12-05 19:40:50.945901] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.926 [2024-12-05 19:40:50.945976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.926 [2024-12-05 19:40:50.945985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:23.926 [2024-12-05 19:40:50.945992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.926 [2024-12-05 19:40:50.946000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.926 [2024-12-05 19:40:50.946044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.926 [2024-12-05 19:40:50.946053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:23.926 [2024-12-05 19:40:50.946061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.926 [2024-12-05 19:40:50.946068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.926 [2024-12-05 19:40:50.946084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.926 [2024-12-05 19:40:50.946095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:23.926 [2024-12-05 19:40:50.946102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.926 [2024-12-05 19:40:50.946109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.926 [2024-12-05 19:40:51.022700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.926 [2024-12-05 19:40:51.022740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:23.926 [2024-12-05 19:40:51.022751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.926 [2024-12-05 19:40:51.022759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.926 [2024-12-05 19:40:51.085654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.926 [2024-12-05 19:40:51.085717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:23.926 [2024-12-05 19:40:51.085733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.926 [2024-12-05 19:40:51.085744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.926 [2024-12-05 19:40:51.085803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.926 [2024-12-05 19:40:51.085817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:23.926 [2024-12-05 19:40:51.085829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.926 [2024-12-05 19:40:51.085839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.926 [2024-12-05 19:40:51.085878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.926 [2024-12-05 19:40:51.085891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:23.926 [2024-12-05 19:40:51.085908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.926 [2024-12-05 19:40:51.085921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.926 [2024-12-05 19:40:51.086027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.926 [2024-12-05 19:40:51.086042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:23.926 [2024-12-05 19:40:51.086050] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.926 [2024-12-05 19:40:51.086058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.926 [2024-12-05 19:40:51.086089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.926 [2024-12-05 19:40:51.086098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:23.926 [2024-12-05 19:40:51.086105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.926 [2024-12-05 19:40:51.086116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.926 [2024-12-05 19:40:51.086152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.926 [2024-12-05 19:40:51.086160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:23.926 [2024-12-05 19:40:51.086168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.926 [2024-12-05 19:40:51.086174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.926 [2024-12-05 19:40:51.086214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.926 [2024-12-05 19:40:51.086223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:23.926 [2024-12-05 19:40:51.086233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.926 [2024-12-05 19:40:51.086240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.926 [2024-12-05 19:40:51.086369] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.106 ms, result 0 00:22:24.864 00:22:24.864 00:22:24.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.864 19:40:52 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=77088 00:22:24.864 19:40:52 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 77088 00:22:24.864 19:40:52 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 77088 ']' 00:22:24.864 19:40:52 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.864 19:40:52 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:24.864 19:40:52 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.864 19:40:52 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:24.864 19:40:52 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:24.864 19:40:52 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:22:25.125 [2024-12-05 19:40:52.120202] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
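The xtrace above shows the pattern trim.sh uses to bring the target back up: launch spdk_tgt with the ftl_init log flag (trim.sh@71), record its pid (svcpid=77088), then block in waitforlisten (trim.sh@73) until the RPC socket at /var/tmp/spdk.sock answers. A minimal stand-in for that helper, assuming only rpc.py and the spdk_get_version RPC — the real waitforlisten in common/autotest_common.sh does more, such as checking that the pid is still alive between retries:

  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  rpc_addr=/var/tmp/spdk.sock

  "$SPDK_BIN" -L ftl_init &     # -L ftl_init enables FTL init logging, per trim.sh@71
  svcpid=$!

  # Poll until the target listens on the UNIX domain socket (cf. max_retries=100 above).
  for ((i = 0; i < 100; i++)); do
      "$RPC" -s "$rpc_addr" spdk_get_version >/dev/null 2>&1 && break
      sleep 0.5
  done

Once the socket answers, the test restores its bdev stack with rpc.py load_config, as the trim.sh@75 trace line below shows.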
00:22:25.125 [2024-12-05 19:40:52.120387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77088 ] 00:22:25.125 [2024-12-05 19:40:52.284225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.386 [2024-12-05 19:40:52.383348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.957 19:40:52 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:25.957 19:40:52 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:22:25.957 19:40:52 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:25.957 [2024-12-05 19:40:53.192047] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:25.957 [2024-12-05 19:40:53.192120] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:26.220 [2024-12-05 19:40:53.367355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.220 [2024-12-05 19:40:53.367422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:26.220 [2024-12-05 19:40:53.367443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:26.220 [2024-12-05 19:40:53.367453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.220 [2024-12-05 19:40:53.371262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.220 [2024-12-05 19:40:53.371323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:26.220 [2024-12-05 19:40:53.371337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.785 ms 00:22:26.220 [2024-12-05 19:40:53.371346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.220 [2024-12-05 19:40:53.371494] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:26.220 [2024-12-05 19:40:53.372274] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:26.220 [2024-12-05 19:40:53.372311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.220 [2024-12-05 19:40:53.372321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:26.220 [2024-12-05 19:40:53.372333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.833 ms 00:22:26.220 [2024-12-05 19:40:53.372340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.220 [2024-12-05 19:40:53.374128] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:26.220 [2024-12-05 19:40:53.388754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.220 [2024-12-05 19:40:53.388827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:26.220 [2024-12-05 19:40:53.388842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.632 ms 00:22:26.220 [2024-12-05 19:40:53.388852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.220 [2024-12-05 19:40:53.388976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.220 [2024-12-05 19:40:53.388990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:26.220 [2024-12-05 19:40:53.389000] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:26.220 [2024-12-05 19:40:53.389010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.220 [2024-12-05 19:40:53.398006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.220 [2024-12-05 19:40:53.398076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:26.220 [2024-12-05 19:40:53.398093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.938 ms 00:22:26.220 [2024-12-05 19:40:53.398109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.220 [2024-12-05 19:40:53.398263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.220 [2024-12-05 19:40:53.398282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:26.220 [2024-12-05 19:40:53.398296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:22:26.220 [2024-12-05 19:40:53.398317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.220 [2024-12-05 19:40:53.398355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.220 [2024-12-05 19:40:53.398372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:26.220 [2024-12-05 19:40:53.398385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:26.220 [2024-12-05 19:40:53.398402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.220 [2024-12-05 19:40:53.398439] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:26.220 [2024-12-05 19:40:53.402969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.220 [2024-12-05 19:40:53.403014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:26.220 [2024-12-05 19:40:53.403029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.537 ms 00:22:26.220 [2024-12-05 19:40:53.403039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.220 [2024-12-05 19:40:53.403132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.220 [2024-12-05 19:40:53.403143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:26.220 [2024-12-05 19:40:53.403155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:26.220 [2024-12-05 19:40:53.403168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.220 [2024-12-05 19:40:53.403193] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:26.220 [2024-12-05 19:40:53.403217] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:26.220 [2024-12-05 19:40:53.403267] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:26.220 [2024-12-05 19:40:53.403286] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:26.220 [2024-12-05 19:40:53.403403] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:26.220 [2024-12-05 19:40:53.403417] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:26.220 [2024-12-05 19:40:53.403434] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:26.221 [2024-12-05 19:40:53.403447] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:26.221 [2024-12-05 19:40:53.403459] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:26.221 [2024-12-05 19:40:53.403469] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:26.221 [2024-12-05 19:40:53.403480] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:26.221 [2024-12-05 19:40:53.403488] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:26.221 [2024-12-05 19:40:53.403501] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:26.221 [2024-12-05 19:40:53.403510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.221 [2024-12-05 19:40:53.403520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:26.221 [2024-12-05 19:40:53.403529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:22:26.221 [2024-12-05 19:40:53.403539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.221 [2024-12-05 19:40:53.403630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.221 [2024-12-05 19:40:53.403642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:26.221 [2024-12-05 19:40:53.403651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:26.221 [2024-12-05 19:40:53.403662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.221 [2024-12-05 19:40:53.403797] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:26.221 [2024-12-05 19:40:53.403812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:26.221 [2024-12-05 19:40:53.403821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:26.221 [2024-12-05 19:40:53.403830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:26.221 [2024-12-05 19:40:53.403839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:26.221 [2024-12-05 19:40:53.403850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:26.221 [2024-12-05 19:40:53.403858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:26.221 [2024-12-05 19:40:53.403868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:26.221 [2024-12-05 19:40:53.403875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:26.221 [2024-12-05 19:40:53.403884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:26.221 [2024-12-05 19:40:53.403892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:26.221 [2024-12-05 19:40:53.403901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:26.221 [2024-12-05 19:40:53.403907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:26.221 [2024-12-05 19:40:53.403916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:26.221 [2024-12-05 19:40:53.403923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:26.221 [2024-12-05 19:40:53.403931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:26.221 
[2024-12-05 19:40:53.403939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:26.221 [2024-12-05 19:40:53.403948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:26.221 [2024-12-05 19:40:53.403962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:26.221 [2024-12-05 19:40:53.403971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:26.221 [2024-12-05 19:40:53.403978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:26.221 [2024-12-05 19:40:53.403987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:26.221 [2024-12-05 19:40:53.403994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:26.221 [2024-12-05 19:40:53.404005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:26.221 [2024-12-05 19:40:53.404012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:26.221 [2024-12-05 19:40:53.404020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:26.221 [2024-12-05 19:40:53.404026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:26.221 [2024-12-05 19:40:53.404035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:26.221 [2024-12-05 19:40:53.404041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:26.221 [2024-12-05 19:40:53.404052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:26.221 [2024-12-05 19:40:53.404058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:26.221 [2024-12-05 19:40:53.404067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:26.221 [2024-12-05 19:40:53.404073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:26.221 [2024-12-05 19:40:53.404082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:26.221 [2024-12-05 19:40:53.404088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:26.221 [2024-12-05 19:40:53.404096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:26.221 [2024-12-05 19:40:53.404103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:26.221 [2024-12-05 19:40:53.404113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:26.221 [2024-12-05 19:40:53.404119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:26.221 [2024-12-05 19:40:53.404129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:26.221 [2024-12-05 19:40:53.404136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:26.221 [2024-12-05 19:40:53.404145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:26.221 [2024-12-05 19:40:53.404152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:26.221 [2024-12-05 19:40:53.404160] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:26.221 [2024-12-05 19:40:53.404171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:26.221 [2024-12-05 19:40:53.404179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:26.221 [2024-12-05 19:40:53.404187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:26.221 [2024-12-05 19:40:53.404197] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:22:26.221 [2024-12-05 19:40:53.404204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:26.221 [2024-12-05 19:40:53.404213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:26.221 [2024-12-05 19:40:53.404223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:26.221 [2024-12-05 19:40:53.404232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:26.221 [2024-12-05 19:40:53.404240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:26.221 [2024-12-05 19:40:53.404250] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:26.221 [2024-12-05 19:40:53.404260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:26.221 [2024-12-05 19:40:53.404273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:26.221 [2024-12-05 19:40:53.404281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:26.221 [2024-12-05 19:40:53.404290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:26.221 [2024-12-05 19:40:53.404298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:26.221 [2024-12-05 19:40:53.404314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:26.221 [2024-12-05 19:40:53.404321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:26.221 [2024-12-05 19:40:53.404332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:26.221 [2024-12-05 19:40:53.404339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:26.221 [2024-12-05 19:40:53.404349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:26.221 [2024-12-05 19:40:53.404356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:26.221 [2024-12-05 19:40:53.404367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:26.221 [2024-12-05 19:40:53.404375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:26.221 [2024-12-05 19:40:53.404384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:26.221 [2024-12-05 19:40:53.404392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:26.221 [2024-12-05 19:40:53.404401] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:26.221 [2024-12-05 
19:40:53.404409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:26.221 [2024-12-05 19:40:53.404421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:26.221 [2024-12-05 19:40:53.404430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:26.221 [2024-12-05 19:40:53.404439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:26.221 [2024-12-05 19:40:53.404446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:26.221 [2024-12-05 19:40:53.404456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.221 [2024-12-05 19:40:53.404464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:26.221 [2024-12-05 19:40:53.404474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:22:26.221 [2024-12-05 19:40:53.404483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.221 [2024-12-05 19:40:53.437401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.221 [2024-12-05 19:40:53.437462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:26.221 [2024-12-05 19:40:53.437478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.854 ms 00:22:26.221 [2024-12-05 19:40:53.437490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.221 [2024-12-05 19:40:53.437635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.222 [2024-12-05 19:40:53.437646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:26.222 [2024-12-05 19:40:53.437658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:26.222 [2024-12-05 19:40:53.437666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.484 [2024-12-05 19:40:53.473077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.484 [2024-12-05 19:40:53.473131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:26.484 [2024-12-05 19:40:53.473146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.357 ms 00:22:26.484 [2024-12-05 19:40:53.473156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.484 [2024-12-05 19:40:53.473258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.484 [2024-12-05 19:40:53.473269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:26.484 [2024-12-05 19:40:53.473281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:26.484 [2024-12-05 19:40:53.473291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.484 [2024-12-05 19:40:53.473870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.484 [2024-12-05 19:40:53.473896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:26.484 [2024-12-05 19:40:53.473909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:22:26.484 [2024-12-05 19:40:53.473918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:26.484 [2024-12-05 19:40:53.474075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.484 [2024-12-05 19:40:53.474086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:26.484 [2024-12-05 19:40:53.474098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:22:26.484 [2024-12-05 19:40:53.474108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.484 [2024-12-05 19:40:53.491975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.484 [2024-12-05 19:40:53.492178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:26.484 [2024-12-05 19:40:53.492203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.838 ms 00:22:26.484 [2024-12-05 19:40:53.492213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.484 [2024-12-05 19:40:53.521304] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:26.484 [2024-12-05 19:40:53.521524] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:26.484 [2024-12-05 19:40:53.521553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.484 [2024-12-05 19:40:53.521564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:26.484 [2024-12-05 19:40:53.521577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.207 ms 00:22:26.484 [2024-12-05 19:40:53.521593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.484 [2024-12-05 19:40:53.548416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.484 [2024-12-05 19:40:53.548602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:26.484 [2024-12-05 19:40:53.548630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.665 ms 00:22:26.484 [2024-12-05 19:40:53.548640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.484 [2024-12-05 19:40:53.561942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.484 [2024-12-05 19:40:53.562004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:26.484 [2024-12-05 19:40:53.562022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.181 ms 00:22:26.484 [2024-12-05 19:40:53.562030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.484 [2024-12-05 19:40:53.574883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.484 [2024-12-05 19:40:53.575057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:26.484 [2024-12-05 19:40:53.575082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.752 ms 00:22:26.484 [2024-12-05 19:40:53.575090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.485 [2024-12-05 19:40:53.575816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.485 [2024-12-05 19:40:53.575846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:26.485 [2024-12-05 19:40:53.575859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:22:26.485 [2024-12-05 19:40:53.575867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.485 [2024-12-05 
19:40:53.642349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.485 [2024-12-05 19:40:53.642573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:26.485 [2024-12-05 19:40:53.642602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.446 ms 00:22:26.485 [2024-12-05 19:40:53.642612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.485 [2024-12-05 19:40:53.654282] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:26.485 [2024-12-05 19:40:53.674595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.485 [2024-12-05 19:40:53.674665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:26.485 [2024-12-05 19:40:53.674703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.836 ms 00:22:26.485 [2024-12-05 19:40:53.674715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.485 [2024-12-05 19:40:53.674839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.485 [2024-12-05 19:40:53.674854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:26.485 [2024-12-05 19:40:53.674863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:26.485 [2024-12-05 19:40:53.674874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.485 [2024-12-05 19:40:53.674932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.485 [2024-12-05 19:40:53.674944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:26.485 [2024-12-05 19:40:53.674952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:26.485 [2024-12-05 19:40:53.674965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.485 [2024-12-05 19:40:53.674990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.485 [2024-12-05 19:40:53.675002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:26.485 [2024-12-05 19:40:53.675010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:26.485 [2024-12-05 19:40:53.675023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.485 [2024-12-05 19:40:53.675061] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:26.485 [2024-12-05 19:40:53.675076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.485 [2024-12-05 19:40:53.675088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:26.485 [2024-12-05 19:40:53.675098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:26.485 [2024-12-05 19:40:53.675106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.485 [2024-12-05 19:40:53.700581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.485 [2024-12-05 19:40:53.700617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:26.485 [2024-12-05 19:40:53.700631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.420 ms 00:22:26.485 [2024-12-05 19:40:53.700639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.485 [2024-12-05 19:40:53.700747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.485 [2024-12-05 19:40:53.700758] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:26.485 [2024-12-05 19:40:53.700769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:26.485 [2024-12-05 19:40:53.700792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.485 [2024-12-05 19:40:53.701589] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:26.485 [2024-12-05 19:40:53.704605] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 333.963 ms, result 0 00:22:26.485 [2024-12-05 19:40:53.706408] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:26.485 Some configs were skipped because the RPC state that can call them passed over. 00:22:26.746 19:40:53 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:22:26.746 [2024-12-05 19:40:53.935177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.746 [2024-12-05 19:40:53.935435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:26.746 [2024-12-05 19:40:53.935641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.548 ms 00:22:26.746 [2024-12-05 19:40:53.935721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.746 [2024-12-05 19:40:53.935994] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 4.359 ms, result 0 00:22:26.746 true 00:22:26.746 19:40:53 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:22:27.007 [2024-12-05 19:40:54.154960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.007 [2024-12-05 19:40:54.155214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:27.007 [2024-12-05 19:40:54.155243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.528 ms 00:22:27.007 [2024-12-05 19:40:54.155253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.007 [2024-12-05 19:40:54.155307] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.883 ms, result 0 00:22:27.007 true 00:22:27.007 19:40:54 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 77088 00:22:27.007 19:40:54 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77088 ']' 00:22:27.007 19:40:54 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77088 00:22:27.007 19:40:54 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:22:27.007 19:40:54 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:27.007 19:40:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77088 00:22:27.007 killing process with pid 77088 00:22:27.007 19:40:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:27.007 19:40:54 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:27.007 19:40:54 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77088' 00:22:27.007 19:40:54 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 77088 00:22:27.007 19:40:54 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 77088 00:22:27.966 [2024-12-05 19:40:54.960051] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.966 [2024-12-05 19:40:54.960116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:27.966 [2024-12-05 19:40:54.960130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:27.966 [2024-12-05 19:40:54.960140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.966 [2024-12-05 19:40:54.960165] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:27.966 [2024-12-05 19:40:54.963003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.966 [2024-12-05 19:40:54.963160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:27.966 [2024-12-05 19:40:54.963187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.817 ms 00:22:27.966 [2024-12-05 19:40:54.963195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.966 [2024-12-05 19:40:54.963499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.966 [2024-12-05 19:40:54.963510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:27.966 [2024-12-05 19:40:54.963521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:22:27.966 [2024-12-05 19:40:54.963528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.966 [2024-12-05 19:40:54.968306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.966 [2024-12-05 19:40:54.968343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:27.966 [2024-12-05 19:40:54.968360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.754 ms 00:22:27.966 [2024-12-05 19:40:54.968369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.966 [2024-12-05 19:40:54.975453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.966 [2024-12-05 19:40:54.975581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:27.966 [2024-12-05 19:40:54.975644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.039 ms 00:22:27.966 [2024-12-05 19:40:54.975667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.966 [2024-12-05 19:40:54.986720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.966 [2024-12-05 19:40:54.986868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:27.966 [2024-12-05 19:40:54.986931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.966 ms 00:22:27.966 [2024-12-05 19:40:54.986942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.966 [2024-12-05 19:40:54.994972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.966 [2024-12-05 19:40:54.995020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:27.966 [2024-12-05 19:40:54.995034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.925 ms 00:22:27.966 [2024-12-05 19:40:54.995043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.966 [2024-12-05 19:40:54.995197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.966 [2024-12-05 19:40:54.995208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:27.966 [2024-12-05 19:40:54.995219] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:22:27.966 [2024-12-05 19:40:54.995228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.966 [2024-12-05 19:40:55.006547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.966 [2024-12-05 19:40:55.006708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:27.966 [2024-12-05 19:40:55.006731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.295 ms 00:22:27.966 [2024-12-05 19:40:55.006738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.966 [2024-12-05 19:40:55.017828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.966 [2024-12-05 19:40:55.018034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:27.966 [2024-12-05 19:40:55.018066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.698 ms 00:22:27.966 [2024-12-05 19:40:55.018075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.966 [2024-12-05 19:40:55.028472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.966 [2024-12-05 19:40:55.028519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:27.966 [2024-12-05 19:40:55.028534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.290 ms 00:22:27.966 [2024-12-05 19:40:55.028543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.966 [2024-12-05 19:40:55.038507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.966 [2024-12-05 19:40:55.038547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:27.966 [2024-12-05 19:40:55.038560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.886 ms 00:22:27.966 [2024-12-05 19:40:55.038567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.966 [2024-12-05 19:40:55.038613] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:27.966 [2024-12-05 19:40:55.038630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038744] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:27.966 [2024-12-05 19:40:55.038918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.038931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.038945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.038958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.038971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.038983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.038996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 
[2024-12-05 19:40:55.039023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:22:27.967 [2024-12-05 19:40:55.039364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:27.967 [2024-12-05 19:40:55.039735] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:27.967 [2024-12-05 19:40:55.039750] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e1e31a5-869f-42dd-82c7-3d82fe790364 00:22:27.967 [2024-12-05 19:40:55.039763] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:27.967 [2024-12-05 19:40:55.039773] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:27.967 [2024-12-05 19:40:55.039781] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:27.967 [2024-12-05 19:40:55.039791] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:27.967 [2024-12-05 19:40:55.039799] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:27.967 [2024-12-05 19:40:55.039809] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:27.967 [2024-12-05 19:40:55.039816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:27.967 [2024-12-05 19:40:55.039825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:27.967 [2024-12-05 19:40:55.039833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:27.967 [2024-12-05 19:40:55.039842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:27.967 [2024-12-05 19:40:55.039850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:27.967 [2024-12-05 19:40:55.039861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.231 ms 00:22:27.967 [2024-12-05 19:40:55.039869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.967 [2024-12-05 19:40:55.053625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.967 [2024-12-05 19:40:55.053793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:27.967 [2024-12-05 19:40:55.053865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.708 ms 00:22:27.967 [2024-12-05 19:40:55.053891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.967 [2024-12-05 19:40:55.054322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.967 [2024-12-05 19:40:55.054432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:27.967 [2024-12-05 19:40:55.054631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:22:27.967 [2024-12-05 19:40:55.054685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.967 [2024-12-05 19:40:55.102830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.967 [2024-12-05 19:40:55.103032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:27.967 [2024-12-05 19:40:55.103105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.967 [2024-12-05 19:40:55.103132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.967 [2024-12-05 19:40:55.103291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.967 [2024-12-05 19:40:55.103337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:27.967 [2024-12-05 19:40:55.103368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.967 [2024-12-05 19:40:55.103391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.967 [2024-12-05 19:40:55.103467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.967 [2024-12-05 19:40:55.103651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:27.967 [2024-12-05 19:40:55.103708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.967 [2024-12-05 19:40:55.103733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.967 [2024-12-05 19:40:55.103771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.967 [2024-12-05 19:40:55.103796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:27.967 [2024-12-05 19:40:55.103822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.967 [2024-12-05 19:40:55.103924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.967 [2024-12-05 19:40:55.187618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.967 [2024-12-05 19:40:55.187909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:27.967 [2024-12-05 19:40:55.187938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.967 [2024-12-05 19:40:55.187948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.229 [2024-12-05 
19:40:55.257119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.229 [2024-12-05 19:40:55.257192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:28.229 [2024-12-05 19:40:55.257207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.229 [2024-12-05 19:40:55.257221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.230 [2024-12-05 19:40:55.257337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.230 [2024-12-05 19:40:55.257348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:28.230 [2024-12-05 19:40:55.257362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.230 [2024-12-05 19:40:55.257371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.230 [2024-12-05 19:40:55.257407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.230 [2024-12-05 19:40:55.257417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:28.230 [2024-12-05 19:40:55.257427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.230 [2024-12-05 19:40:55.257436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.230 [2024-12-05 19:40:55.257543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.230 [2024-12-05 19:40:55.257554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:28.230 [2024-12-05 19:40:55.257565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.230 [2024-12-05 19:40:55.257573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.230 [2024-12-05 19:40:55.257611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.230 [2024-12-05 19:40:55.257620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:28.230 [2024-12-05 19:40:55.257631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.230 [2024-12-05 19:40:55.257641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.230 [2024-12-05 19:40:55.257726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.230 [2024-12-05 19:40:55.257737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:28.230 [2024-12-05 19:40:55.257750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.230 [2024-12-05 19:40:55.257758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.230 [2024-12-05 19:40:55.257814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.230 [2024-12-05 19:40:55.257826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:28.230 [2024-12-05 19:40:55.257837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.230 [2024-12-05 19:40:55.257845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.230 [2024-12-05 19:40:55.258006] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 297.922 ms, result 0 00:22:28.802 19:40:55 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:28.802 19:40:55 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:29.064 [2024-12-05 19:40:56.059975] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:22:29.064 [2024-12-05 19:40:56.060110] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77145 ] 00:22:29.064 [2024-12-05 19:40:56.222237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.326 [2024-12-05 19:40:56.354436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.589 [2024-12-05 19:40:56.657501] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:29.589 [2024-12-05 19:40:56.657602] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:29.589 [2024-12-05 19:40:56.819190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.589 [2024-12-05 19:40:56.819376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:29.589 [2024-12-05 19:40:56.819395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:29.589 [2024-12-05 19:40:56.819404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.589 [2024-12-05 19:40:56.822136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.589 [2024-12-05 19:40:56.822174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:29.589 [2024-12-05 19:40:56.822185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.710 ms 00:22:29.589 [2024-12-05 19:40:56.822194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.589 [2024-12-05 19:40:56.822288] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:29.589 [2024-12-05 19:40:56.823076] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:29.589 [2024-12-05 19:40:56.823174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.589 [2024-12-05 19:40:56.823225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:29.589 [2024-12-05 19:40:56.823249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.893 ms 00:22:29.589 [2024-12-05 19:40:56.823268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.589 [2024-12-05 19:40:56.824412] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:29.589 [2024-12-05 19:40:56.837549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.589 [2024-12-05 19:40:56.837681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:29.589 [2024-12-05 19:40:56.837737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.139 ms 00:22:29.589 [2024-12-05 19:40:56.837760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.589 [2024-12-05 19:40:56.838185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.589 [2024-12-05 19:40:56.838388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:29.589 [2024-12-05 19:40:56.838406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.033 ms 00:22:29.589 [2024-12-05 19:40:56.838414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.851 [2024-12-05 19:40:56.843338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.851 [2024-12-05 19:40:56.843370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:29.851 [2024-12-05 19:40:56.843379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.871 ms 00:22:29.851 [2024-12-05 19:40:56.843387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.851 [2024-12-05 19:40:56.843472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.851 [2024-12-05 19:40:56.843482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:29.851 [2024-12-05 19:40:56.843490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:29.851 [2024-12-05 19:40:56.843498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.851 [2024-12-05 19:40:56.843525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.851 [2024-12-05 19:40:56.843533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:29.851 [2024-12-05 19:40:56.843541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:29.851 [2024-12-05 19:40:56.843548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.851 [2024-12-05 19:40:56.843568] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:29.851 [2024-12-05 19:40:56.846887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.851 [2024-12-05 19:40:56.846914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:29.851 [2024-12-05 19:40:56.846924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.324 ms 00:22:29.851 [2024-12-05 19:40:56.846933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.851 [2024-12-05 19:40:56.846969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.851 [2024-12-05 19:40:56.846978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:29.851 [2024-12-05 19:40:56.846986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:29.851 [2024-12-05 19:40:56.846993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.851 [2024-12-05 19:40:56.847012] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:29.851 [2024-12-05 19:40:56.847031] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:29.851 [2024-12-05 19:40:56.847065] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:29.851 [2024-12-05 19:40:56.847079] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:29.851 [2024-12-05 19:40:56.847181] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:29.851 [2024-12-05 19:40:56.847192] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:29.851 [2024-12-05 19:40:56.847203] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:29.851 [2024-12-05 19:40:56.847215] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:29.851 [2024-12-05 19:40:56.847224] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:29.851 [2024-12-05 19:40:56.847232] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:29.851 [2024-12-05 19:40:56.847239] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:29.851 [2024-12-05 19:40:56.847246] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:29.851 [2024-12-05 19:40:56.847253] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:29.851 [2024-12-05 19:40:56.847260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.851 [2024-12-05 19:40:56.847267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:29.851 [2024-12-05 19:40:56.847275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:22:29.851 [2024-12-05 19:40:56.847283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.851 [2024-12-05 19:40:56.847369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.851 [2024-12-05 19:40:56.847379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:29.851 [2024-12-05 19:40:56.847387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:29.851 [2024-12-05 19:40:56.847394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.851 [2024-12-05 19:40:56.847506] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:29.851 [2024-12-05 19:40:56.847516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:29.851 [2024-12-05 19:40:56.847524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:29.851 [2024-12-05 19:40:56.847531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.851 [2024-12-05 19:40:56.847539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:29.851 [2024-12-05 19:40:56.847547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:29.851 [2024-12-05 19:40:56.847554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:29.851 [2024-12-05 19:40:56.847562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:29.851 [2024-12-05 19:40:56.847569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:29.851 [2024-12-05 19:40:56.847575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:29.851 [2024-12-05 19:40:56.847582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:29.852 [2024-12-05 19:40:56.847595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:29.852 [2024-12-05 19:40:56.847601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:29.852 [2024-12-05 19:40:56.847608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:29.852 [2024-12-05 19:40:56.847615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:29.852 [2024-12-05 19:40:56.847621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.852 [2024-12-05 19:40:56.847628] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:29.852 [2024-12-05 19:40:56.847635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:29.852 [2024-12-05 19:40:56.847641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.852 [2024-12-05 19:40:56.847648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:29.852 [2024-12-05 19:40:56.847655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:29.852 [2024-12-05 19:40:56.847661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.852 [2024-12-05 19:40:56.847685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:29.852 [2024-12-05 19:40:56.847693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:29.852 [2024-12-05 19:40:56.847700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.852 [2024-12-05 19:40:56.847706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:29.852 [2024-12-05 19:40:56.847713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:29.852 [2024-12-05 19:40:56.847720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.852 [2024-12-05 19:40:56.847726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:29.852 [2024-12-05 19:40:56.847733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:29.852 [2024-12-05 19:40:56.847739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.852 [2024-12-05 19:40:56.847746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:29.852 [2024-12-05 19:40:56.847753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:29.852 [2024-12-05 19:40:56.847759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:29.852 [2024-12-05 19:40:56.847765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:29.852 [2024-12-05 19:40:56.847772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:29.852 [2024-12-05 19:40:56.847778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:29.852 [2024-12-05 19:40:56.847785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:29.852 [2024-12-05 19:40:56.847792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:29.852 [2024-12-05 19:40:56.847799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.852 [2024-12-05 19:40:56.847805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:29.852 [2024-12-05 19:40:56.847812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:29.852 [2024-12-05 19:40:56.847819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.852 [2024-12-05 19:40:56.847826] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:29.852 [2024-12-05 19:40:56.847833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:29.852 [2024-12-05 19:40:56.847843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:29.852 [2024-12-05 19:40:56.847850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.852 [2024-12-05 19:40:56.847857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:29.852 
[2024-12-05 19:40:56.847864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:29.852 [2024-12-05 19:40:56.847871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:29.852 [2024-12-05 19:40:56.847878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:29.852 [2024-12-05 19:40:56.847885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:29.852 [2024-12-05 19:40:56.847891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:29.852 [2024-12-05 19:40:56.847899] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:29.852 [2024-12-05 19:40:56.847908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:29.852 [2024-12-05 19:40:56.847916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:29.852 [2024-12-05 19:40:56.847924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:29.852 [2024-12-05 19:40:56.847931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:29.852 [2024-12-05 19:40:56.847938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:29.852 [2024-12-05 19:40:56.847945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:29.852 [2024-12-05 19:40:56.847952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:29.852 [2024-12-05 19:40:56.847959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:29.852 [2024-12-05 19:40:56.847966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:29.852 [2024-12-05 19:40:56.847972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:29.852 [2024-12-05 19:40:56.847979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:29.852 [2024-12-05 19:40:56.847986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:29.852 [2024-12-05 19:40:56.847994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:29.852 [2024-12-05 19:40:56.848001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:29.852 [2024-12-05 19:40:56.848008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:29.852 [2024-12-05 19:40:56.848015] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:29.852 [2024-12-05 19:40:56.848022] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:29.852 [2024-12-05 19:40:56.848031] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:29.852 [2024-12-05 19:40:56.848038] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:29.852 [2024-12-05 19:40:56.848045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:29.852 [2024-12-05 19:40:56.848052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:29.852 [2024-12-05 19:40:56.848059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.852 [2024-12-05 19:40:56.848070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:29.852 [2024-12-05 19:40:56.848077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:22:29.852 [2024-12-05 19:40:56.848084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.852 [2024-12-05 19:40:56.873818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.852 [2024-12-05 19:40:56.873851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:29.852 [2024-12-05 19:40:56.873861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.666 ms 00:22:29.852 [2024-12-05 19:40:56.873868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.852 [2024-12-05 19:40:56.873984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.852 [2024-12-05 19:40:56.873994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:29.852 [2024-12-05 19:40:56.874002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:22:29.852 [2024-12-05 19:40:56.874009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.852 [2024-12-05 19:40:56.923479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.852 [2024-12-05 19:40:56.923516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:29.852 [2024-12-05 19:40:56.923531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.450 ms 00:22:29.852 [2024-12-05 19:40:56.923538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.852 [2024-12-05 19:40:56.923626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.852 [2024-12-05 19:40:56.923638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:29.852 [2024-12-05 19:40:56.923647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:29.852 [2024-12-05 19:40:56.923655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.852 [2024-12-05 19:40:56.923991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.852 [2024-12-05 19:40:56.924007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:29.852 [2024-12-05 19:40:56.924022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:22:29.852 [2024-12-05 19:40:56.924029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.852 [2024-12-05 
19:40:56.924154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.852 [2024-12-05 19:40:56.924163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:29.852 [2024-12-05 19:40:56.924170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:22:29.852 [2024-12-05 19:40:56.924177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.852 [2024-12-05 19:40:56.937503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.852 [2024-12-05 19:40:56.937535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:29.852 [2024-12-05 19:40:56.937545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.307 ms 00:22:29.852 [2024-12-05 19:40:56.937553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.852 [2024-12-05 19:40:56.950361] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:29.852 [2024-12-05 19:40:56.950396] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:29.852 [2024-12-05 19:40:56.950407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.852 [2024-12-05 19:40:56.950415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:29.852 [2024-12-05 19:40:56.950424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.763 ms 00:22:29.853 [2024-12-05 19:40:56.950431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.853 [2024-12-05 19:40:56.974703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.853 [2024-12-05 19:40:56.974737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:29.853 [2024-12-05 19:40:56.974748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.200 ms 00:22:29.853 [2024-12-05 19:40:56.974757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.853 [2024-12-05 19:40:56.986504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.853 [2024-12-05 19:40:56.986535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:29.853 [2024-12-05 19:40:56.986545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.677 ms 00:22:29.853 [2024-12-05 19:40:56.986552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.853 [2024-12-05 19:40:56.998071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.853 [2024-12-05 19:40:56.998100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:29.853 [2024-12-05 19:40:56.998112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.456 ms 00:22:29.853 [2024-12-05 19:40:56.998119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.853 [2024-12-05 19:40:56.998732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.853 [2024-12-05 19:40:56.998751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:29.853 [2024-12-05 19:40:56.998760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:22:29.853 [2024-12-05 19:40:56.998768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.853 [2024-12-05 19:40:57.054121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:29.853 [2024-12-05 19:40:57.054168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:29.853 [2024-12-05 19:40:57.054181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.329 ms 00:22:29.853 [2024-12-05 19:40:57.054190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.853 [2024-12-05 19:40:57.064611] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:29.853 [2024-12-05 19:40:57.078645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.853 [2024-12-05 19:40:57.078696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:29.853 [2024-12-05 19:40:57.078708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.346 ms 00:22:29.853 [2024-12-05 19:40:57.078720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.853 [2024-12-05 19:40:57.078803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.853 [2024-12-05 19:40:57.078814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:29.853 [2024-12-05 19:40:57.078823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:29.853 [2024-12-05 19:40:57.078830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.853 [2024-12-05 19:40:57.078874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.853 [2024-12-05 19:40:57.078882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:29.853 [2024-12-05 19:40:57.078890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:29.853 [2024-12-05 19:40:57.078900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.853 [2024-12-05 19:40:57.078930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.853 [2024-12-05 19:40:57.078939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:29.853 [2024-12-05 19:40:57.078948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:29.853 [2024-12-05 19:40:57.078955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.853 [2024-12-05 19:40:57.078984] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:29.853 [2024-12-05 19:40:57.078993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.853 [2024-12-05 19:40:57.079000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:29.853 [2024-12-05 19:40:57.079007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:29.853 [2024-12-05 19:40:57.079014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.129 [2024-12-05 19:40:57.102967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.129 [2024-12-05 19:40:57.103003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:30.129 [2024-12-05 19:40:57.103015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.931 ms 00:22:30.129 [2024-12-05 19:40:57.103024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.129 [2024-12-05 19:40:57.103115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.129 [2024-12-05 19:40:57.103126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:22:30.130 [2024-12-05 19:40:57.103134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:30.130 [2024-12-05 19:40:57.103142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.130 [2024-12-05 19:40:57.104618] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:30.130 [2024-12-05 19:40:57.107660] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 285.155 ms, result 0 00:22:30.130 [2024-12-05 19:40:57.108907] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:30.130 [2024-12-05 19:40:57.121944] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:31.071  [2024-12-05T19:40:59.270Z] Copying: 14/256 [MB] (14 MBps) [2024-12-05T19:41:00.227Z] Copying: 25/256 [MB] (10 MBps) [2024-12-05T19:41:01.170Z] Copying: 35/256 [MB] (10 MBps) [2024-12-05T19:41:02.555Z] Copying: 46/256 [MB] (10 MBps) [2024-12-05T19:41:03.129Z] Copying: 58/256 [MB] (11 MBps) [2024-12-05T19:41:04.514Z] Copying: 70/256 [MB] (11 MBps) [2024-12-05T19:41:05.455Z] Copying: 82/256 [MB] (11 MBps) [2024-12-05T19:41:06.465Z] Copying: 94264/262144 [kB] (10012 kBps) [2024-12-05T19:41:07.409Z] Copying: 103/256 [MB] (11 MBps) [2024-12-05T19:41:08.401Z] Copying: 114/256 [MB] (11 MBps) [2024-12-05T19:41:09.344Z] Copying: 125/256 [MB] (11 MBps) [2024-12-05T19:41:10.286Z] Copying: 137/256 [MB] (11 MBps) [2024-12-05T19:41:11.228Z] Copying: 147/256 [MB] (10 MBps) [2024-12-05T19:41:12.169Z] Copying: 161552/262144 [kB] (10176 kBps) [2024-12-05T19:41:13.550Z] Copying: 167/256 [MB] (10 MBps) [2024-12-05T19:41:14.490Z] Copying: 180/256 [MB] (12 MBps) [2024-12-05T19:41:15.454Z] Copying: 191/256 [MB] (11 MBps) [2024-12-05T19:41:16.393Z] Copying: 205/256 [MB] (13 MBps) [2024-12-05T19:41:17.161Z] Copying: 215/256 [MB] (10 MBps) [2024-12-05T19:41:18.550Z] Copying: 230708/262144 [kB] (10188 kBps) [2024-12-05T19:41:19.540Z] Copying: 240848/262144 [kB] (10140 kBps) [2024-12-05T19:41:20.483Z] Copying: 250644/262144 [kB] (9796 kBps) [2024-12-05T19:41:20.483Z] Copying: 260440/262144 [kB] (9796 kBps) [2024-12-05T19:41:20.483Z] Copying: 256/256 [MB] (average 11 MBps)[2024-12-05 19:41:20.299367] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:53.228 [2024-12-05 19:41:20.308646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.228 [2024-12-05 19:41:20.308689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:53.228 [2024-12-05 19:41:20.308711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:53.228 [2024-12-05 19:41:20.308720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.228 [2024-12-05 19:41:20.308741] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:53.228 [2024-12-05 19:41:20.311336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.228 [2024-12-05 19:41:20.311462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:53.229 [2024-12-05 19:41:20.311479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.582 ms 00:22:53.229 [2024-12-05 19:41:20.311487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.229 
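
[editor's note] The records above follow a fixed shape: each FTL management step emits an "Action" (or, during teardown, "Rollback") line, then "name:", "duration:" and "status:" lines from trace_step, and each phase closes with a finish_msg line giving the total (here: 'FTL startup', duration = 285.155 ms, result 0). Below is a minimal sketch of a parser for that shape. It is not SPDK tooling; the regexes are written against this excerpt only, and the lookahead tolerates both one-record-per-line console output and the fused lines seen in this capture.

import re
import sys

# 'name: <step>' and 'duration: <ms> ms' lines as they appear in this log.
NAME_RE = re.compile(
    r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)"
    r"(?=\s+\d{2}:\d{2}:\d{2}\.\d+|\s+\[\d{4}-|$)",
    re.M,
)
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

def tally(text):
    # Collect name/duration hits in document order, then pair each
    # step name with the duration record that follows it.
    events = sorted(
        [(m.start(), "name", m.group(1)) for m in NAME_RE.finditer(text)]
        + [(m.start(), "dur", m.group(1)) for m in DUR_RE.finditer(text)]
    )
    steps, pending = [], None
    for _, kind, value in events:
        if kind == "name":
            pending = value
        elif pending is not None:
            steps.append((pending, float(value)))
            pending = None
    return steps

if __name__ == "__main__":
    steps = tally(sys.stdin.read())
    for name, ms in steps:
        print(f"{ms:10.3f} ms  {name}")
    print(f"{sum(ms for _, ms in steps):10.3f} ms  total")

Note that the per-step sum comes in somewhat under the finish_msg total here; the remainder is presumably time spent between steps, which the per-step durations do not cover.
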
[2024-12-05 19:41:20.311761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.229 [2024-12-05 19:41:20.311772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:53.229 [2024-12-05 19:41:20.311781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:22:53.229 [2024-12-05 19:41:20.311789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.229 [2024-12-05 19:41:20.315478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.229 [2024-12-05 19:41:20.315497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:53.229 [2024-12-05 19:41:20.315507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.669 ms 00:22:53.229 [2024-12-05 19:41:20.315515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.229 [2024-12-05 19:41:20.322416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.229 [2024-12-05 19:41:20.322527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:53.229 [2024-12-05 19:41:20.322542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.884 ms 00:22:53.229 [2024-12-05 19:41:20.322549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.229 [2024-12-05 19:41:20.347065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.229 [2024-12-05 19:41:20.347105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:53.229 [2024-12-05 19:41:20.347116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.458 ms 00:22:53.229 [2024-12-05 19:41:20.347124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.229 [2024-12-05 19:41:20.361484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.229 [2024-12-05 19:41:20.361639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:53.229 [2024-12-05 19:41:20.361662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.320 ms 00:22:53.229 [2024-12-05 19:41:20.361692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.229 [2024-12-05 19:41:20.361847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.229 [2024-12-05 19:41:20.361858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:53.229 [2024-12-05 19:41:20.361874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:22:53.229 [2024-12-05 19:41:20.361881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.229 [2024-12-05 19:41:20.385756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.229 [2024-12-05 19:41:20.385798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:53.229 [2024-12-05 19:41:20.385810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.857 ms 00:22:53.229 [2024-12-05 19:41:20.385818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.229 [2024-12-05 19:41:20.409249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.229 [2024-12-05 19:41:20.409280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:53.229 [2024-12-05 19:41:20.409291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.394 ms 00:22:53.229 [2024-12-05 19:41:20.409299] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.229 [2024-12-05 19:41:20.432369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.229 [2024-12-05 19:41:20.432495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:53.229 [2024-12-05 19:41:20.432510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.035 ms 00:22:53.229 [2024-12-05 19:41:20.432517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.229 [2024-12-05 19:41:20.455892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.229 [2024-12-05 19:41:20.455928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:53.229 [2024-12-05 19:41:20.455940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.315 ms 00:22:53.229 [2024-12-05 19:41:20.455947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.229 [2024-12-05 19:41:20.455984] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:53.229 [2024-12-05 19:41:20.455999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456130] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 
19:41:20.456318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:53.229 [2024-12-05 19:41:20.456409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:22:53.230 [2024-12-05 19:41:20.456506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:53.230 [2024-12-05 19:41:20.456804] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:53.230 [2024-12-05 19:41:20.456811] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e1e31a5-869f-42dd-82c7-3d82fe790364 00:22:53.230 [2024-12-05 19:41:20.456819] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:53.230 [2024-12-05 19:41:20.456826] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:53.230 [2024-12-05 19:41:20.456833] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:53.230 [2024-12-05 19:41:20.456841] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:53.230 [2024-12-05 19:41:20.456848] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:53.230 [2024-12-05 19:41:20.456856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:53.230 [2024-12-05 19:41:20.456865] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:53.230 [2024-12-05 19:41:20.456871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:53.230 [2024-12-05 19:41:20.456878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:53.230 [2024-12-05 19:41:20.456885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.230 [2024-12-05 19:41:20.456892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:53.230 [2024-12-05 19:41:20.456901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.902 ms 00:22:53.230 [2024-12-05 19:41:20.456908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.230 [2024-12-05 19:41:20.469469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.230 [2024-12-05 19:41:20.469501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:53.230 [2024-12-05 19:41:20.469516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.529 ms 00:22:53.230 [2024-12-05 19:41:20.469525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.230 [2024-12-05 19:41:20.469914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.230 [2024-12-05 19:41:20.469929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L 
checkpointing 00:22:53.230 [2024-12-05 19:41:20.469938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:22:53.230 [2024-12-05 19:41:20.469945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.491 [2024-12-05 19:41:20.504340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.491 [2024-12-05 19:41:20.504383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:53.491 [2024-12-05 19:41:20.504395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.491 [2024-12-05 19:41:20.504407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.491 [2024-12-05 19:41:20.504488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.491 [2024-12-05 19:41:20.504498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:53.491 [2024-12-05 19:41:20.504507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.491 [2024-12-05 19:41:20.504516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.491 [2024-12-05 19:41:20.504559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.491 [2024-12-05 19:41:20.504569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:53.491 [2024-12-05 19:41:20.504577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.491 [2024-12-05 19:41:20.504586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.491 [2024-12-05 19:41:20.504606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.491 [2024-12-05 19:41:20.504614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:53.491 [2024-12-05 19:41:20.504623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.491 [2024-12-05 19:41:20.504631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.491 [2024-12-05 19:41:20.580809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.491 [2024-12-05 19:41:20.580857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:53.491 [2024-12-05 19:41:20.580868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.491 [2024-12-05 19:41:20.580875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.491 [2024-12-05 19:41:20.642809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.491 [2024-12-05 19:41:20.642996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:53.491 [2024-12-05 19:41:20.643012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.491 [2024-12-05 19:41:20.643021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.491 [2024-12-05 19:41:20.643076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.491 [2024-12-05 19:41:20.643085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:53.491 [2024-12-05 19:41:20.643093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.491 [2024-12-05 19:41:20.643100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.492 [2024-12-05 19:41:20.643128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.492 [2024-12-05 
19:41:20.643141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:53.492 [2024-12-05 19:41:20.643149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.492 [2024-12-05 19:41:20.643156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.492 [2024-12-05 19:41:20.643250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.492 [2024-12-05 19:41:20.643259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:53.492 [2024-12-05 19:41:20.643268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.492 [2024-12-05 19:41:20.643275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.492 [2024-12-05 19:41:20.643305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.492 [2024-12-05 19:41:20.643313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:53.492 [2024-12-05 19:41:20.643323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.492 [2024-12-05 19:41:20.643331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.492 [2024-12-05 19:41:20.643365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.492 [2024-12-05 19:41:20.643374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:53.492 [2024-12-05 19:41:20.643382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.492 [2024-12-05 19:41:20.643389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.492 [2024-12-05 19:41:20.643432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.492 [2024-12-05 19:41:20.643444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:53.492 [2024-12-05 19:41:20.643452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.492 [2024-12-05 19:41:20.643460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.492 [2024-12-05 19:41:20.643588] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 334.936 ms, result 0 00:22:54.434 00:22:54.434 00:22:54.434 19:41:21 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:22:54.434 19:41:21 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:55.004 19:41:21 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:55.004 [2024-12-05 19:41:22.048796] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:22:55.004 [2024-12-05 19:41:22.048919] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77411 ] 00:22:55.004 [2024-12-05 19:41:22.209213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.263 [2024-12-05 19:41:22.311596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.525 [2024-12-05 19:41:22.570948] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:55.525 [2024-12-05 19:41:22.571179] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:55.525 [2024-12-05 19:41:22.730963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.525 [2024-12-05 19:41:22.731014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:55.525 [2024-12-05 19:41:22.731028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:55.525 [2024-12-05 19:41:22.731036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.525 [2024-12-05 19:41:22.734037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.525 [2024-12-05 19:41:22.734181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:55.525 [2024-12-05 19:41:22.734198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.982 ms 00:22:55.525 [2024-12-05 19:41:22.734208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.525 [2024-12-05 19:41:22.734307] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:55.525 [2024-12-05 19:41:22.735077] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:55.525 [2024-12-05 19:41:22.735106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.525 [2024-12-05 19:41:22.735115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:55.525 [2024-12-05 19:41:22.735123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:22:55.525 [2024-12-05 19:41:22.735131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.525 [2024-12-05 19:41:22.736286] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:55.525 [2024-12-05 19:41:22.749167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.525 [2024-12-05 19:41:22.749298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:55.525 [2024-12-05 19:41:22.749315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.881 ms 00:22:55.525 [2024-12-05 19:41:22.749323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.525 [2024-12-05 19:41:22.749409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.525 [2024-12-05 19:41:22.749420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:55.525 [2024-12-05 19:41:22.749429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:55.525 [2024-12-05 19:41:22.749436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.525 [2024-12-05 19:41:22.754440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:55.525 [2024-12-05 19:41:22.754469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:55.525 [2024-12-05 19:41:22.754480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.962 ms 00:22:55.525 [2024-12-05 19:41:22.754487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.525 [2024-12-05 19:41:22.754576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.525 [2024-12-05 19:41:22.754586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:55.525 [2024-12-05 19:41:22.754593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:55.525 [2024-12-05 19:41:22.754601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.525 [2024-12-05 19:41:22.754628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.525 [2024-12-05 19:41:22.754636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:55.525 [2024-12-05 19:41:22.754644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:55.525 [2024-12-05 19:41:22.754651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.525 [2024-12-05 19:41:22.754687] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:55.525 [2024-12-05 19:41:22.757915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.525 [2024-12-05 19:41:22.757942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:55.525 [2024-12-05 19:41:22.757951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.249 ms 00:22:55.525 [2024-12-05 19:41:22.757958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.525 [2024-12-05 19:41:22.757998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.525 [2024-12-05 19:41:22.758007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:55.525 [2024-12-05 19:41:22.758015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:55.525 [2024-12-05 19:41:22.758022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.525 [2024-12-05 19:41:22.758043] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:55.525 [2024-12-05 19:41:22.758062] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:55.525 [2024-12-05 19:41:22.758096] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:55.525 [2024-12-05 19:41:22.758111] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:55.525 [2024-12-05 19:41:22.758213] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:55.525 [2024-12-05 19:41:22.758222] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:55.525 [2024-12-05 19:41:22.758232] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:55.525 [2024-12-05 19:41:22.758244] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:55.525 [2024-12-05 19:41:22.758253] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:55.525 [2024-12-05 19:41:22.758261] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:55.525 [2024-12-05 19:41:22.758268] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:55.525 [2024-12-05 19:41:22.758275] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:55.525 [2024-12-05 19:41:22.758282] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:55.525 [2024-12-05 19:41:22.758289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.525 [2024-12-05 19:41:22.758296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:55.525 [2024-12-05 19:41:22.758303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:22:55.525 [2024-12-05 19:41:22.758310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.525 [2024-12-05 19:41:22.758396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.525 [2024-12-05 19:41:22.758407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:55.525 [2024-12-05 19:41:22.758414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:55.525 [2024-12-05 19:41:22.758421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.525 [2024-12-05 19:41:22.758523] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:55.525 [2024-12-05 19:41:22.758533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:55.525 [2024-12-05 19:41:22.758540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:55.525 [2024-12-05 19:41:22.758548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.525 [2024-12-05 19:41:22.758556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:55.525 [2024-12-05 19:41:22.758562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:55.525 [2024-12-05 19:41:22.758569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:55.525 [2024-12-05 19:41:22.758577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:55.525 [2024-12-05 19:41:22.758583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:55.525 [2024-12-05 19:41:22.758590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:55.525 [2024-12-05 19:41:22.758596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:55.525 [2024-12-05 19:41:22.758610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:55.525 [2024-12-05 19:41:22.758617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:55.525 [2024-12-05 19:41:22.758624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:55.525 [2024-12-05 19:41:22.758630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:55.525 [2024-12-05 19:41:22.758636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.525 [2024-12-05 19:41:22.758643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:55.525 [2024-12-05 19:41:22.758649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:55.525 [2024-12-05 19:41:22.758655] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.525 [2024-12-05 19:41:22.758662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:55.525 [2024-12-05 19:41:22.758686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:55.525 [2024-12-05 19:41:22.758693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:55.525 [2024-12-05 19:41:22.758700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:55.525 [2024-12-05 19:41:22.758707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:55.525 [2024-12-05 19:41:22.758713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:55.525 [2024-12-05 19:41:22.758719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:55.525 [2024-12-05 19:41:22.758726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:55.525 [2024-12-05 19:41:22.758732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:55.525 [2024-12-05 19:41:22.758739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:55.525 [2024-12-05 19:41:22.758746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:55.525 [2024-12-05 19:41:22.758752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:55.525 [2024-12-05 19:41:22.758759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:55.525 [2024-12-05 19:41:22.758765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:55.525 [2024-12-05 19:41:22.758771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:55.525 [2024-12-05 19:41:22.758778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:55.525 [2024-12-05 19:41:22.758784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:55.525 [2024-12-05 19:41:22.758790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:55.525 [2024-12-05 19:41:22.758797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:55.525 [2024-12-05 19:41:22.758803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:55.525 [2024-12-05 19:41:22.758810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.525 [2024-12-05 19:41:22.758816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:55.525 [2024-12-05 19:41:22.758823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:55.525 [2024-12-05 19:41:22.758830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.525 [2024-12-05 19:41:22.758838] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:55.526 [2024-12-05 19:41:22.758846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:55.526 [2024-12-05 19:41:22.758857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:55.526 [2024-12-05 19:41:22.758864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.526 [2024-12-05 19:41:22.758872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:55.526 [2024-12-05 19:41:22.758879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:55.526 [2024-12-05 19:41:22.758885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:55.526 
[2024-12-05 19:41:22.758892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:55.526 [2024-12-05 19:41:22.758898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:55.526 [2024-12-05 19:41:22.758905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:55.526 [2024-12-05 19:41:22.758913] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:55.526 [2024-12-05 19:41:22.758921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:55.526 [2024-12-05 19:41:22.758929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:55.526 [2024-12-05 19:41:22.758936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:55.526 [2024-12-05 19:41:22.758943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:55.526 [2024-12-05 19:41:22.758950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:55.526 [2024-12-05 19:41:22.758957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:55.526 [2024-12-05 19:41:22.758964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:55.526 [2024-12-05 19:41:22.758970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:55.526 [2024-12-05 19:41:22.758977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:55.526 [2024-12-05 19:41:22.758984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:55.526 [2024-12-05 19:41:22.758991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:55.526 [2024-12-05 19:41:22.758999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:55.526 [2024-12-05 19:41:22.759006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:55.526 [2024-12-05 19:41:22.759013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:55.526 [2024-12-05 19:41:22.759041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:55.526 [2024-12-05 19:41:22.759048] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:55.526 [2024-12-05 19:41:22.759056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:55.526 [2024-12-05 19:41:22.759063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:55.526 [2024-12-05 19:41:22.759072] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:55.526 [2024-12-05 19:41:22.759079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:55.526 [2024-12-05 19:41:22.759086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:55.526 [2024-12-05 19:41:22.759093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.526 [2024-12-05 19:41:22.759103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:55.526 [2024-12-05 19:41:22.759110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:22:55.526 [2024-12-05 19:41:22.759118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.787 [2024-12-05 19:41:22.785373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.787 [2024-12-05 19:41:22.785410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:55.787 [2024-12-05 19:41:22.785421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.186 ms 00:22:55.787 [2024-12-05 19:41:22.785429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.787 [2024-12-05 19:41:22.785564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.787 [2024-12-05 19:41:22.785574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:55.787 [2024-12-05 19:41:22.785582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:55.787 [2024-12-05 19:41:22.785589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.787 [2024-12-05 19:41:22.827535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.787 [2024-12-05 19:41:22.827581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:55.787 [2024-12-05 19:41:22.827596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.923 ms 00:22:55.787 [2024-12-05 19:41:22.827604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.787 [2024-12-05 19:41:22.827725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.787 [2024-12-05 19:41:22.827738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:55.787 [2024-12-05 19:41:22.827746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:55.787 [2024-12-05 19:41:22.827754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.788 [2024-12-05 19:41:22.828101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.788 [2024-12-05 19:41:22.828120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:55.788 [2024-12-05 19:41:22.828135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:22:55.788 [2024-12-05 19:41:22.828148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.788 [2024-12-05 19:41:22.828280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.788 [2024-12-05 19:41:22.828289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:55.788 [2024-12-05 19:41:22.828296] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:22:55.788 [2024-12-05 19:41:22.828303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.788 [2024-12-05 19:41:22.841994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.788 [2024-12-05 19:41:22.842025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:55.788 [2024-12-05 19:41:22.842035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.669 ms 00:22:55.788 [2024-12-05 19:41:22.842043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.788 [2024-12-05 19:41:22.855442] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:55.788 [2024-12-05 19:41:22.855479] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:55.788 [2024-12-05 19:41:22.855492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.788 [2024-12-05 19:41:22.855501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:55.788 [2024-12-05 19:41:22.855510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.344 ms 00:22:55.788 [2024-12-05 19:41:22.855518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.788 [2024-12-05 19:41:22.880146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.788 [2024-12-05 19:41:22.880183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:55.788 [2024-12-05 19:41:22.880195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.552 ms 00:22:55.788 [2024-12-05 19:41:22.880203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.788 [2024-12-05 19:41:22.892342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.788 [2024-12-05 19:41:22.892499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:55.788 [2024-12-05 19:41:22.892515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.060 ms 00:22:55.788 [2024-12-05 19:41:22.892522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.788 [2024-12-05 19:41:22.904339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.788 [2024-12-05 19:41:22.904370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:55.788 [2024-12-05 19:41:22.904382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.750 ms 00:22:55.788 [2024-12-05 19:41:22.904390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.788 [2024-12-05 19:41:22.905049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.788 [2024-12-05 19:41:22.905070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:55.788 [2024-12-05 19:41:22.905079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:22:55.788 [2024-12-05 19:41:22.905086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.788 [2024-12-05 19:41:22.962383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.788 [2024-12-05 19:41:22.962440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:55.788 [2024-12-05 19:41:22.962455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 57.273 ms 00:22:55.788 [2024-12-05 19:41:22.962463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.788 [2024-12-05 19:41:22.973308] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:55.788 [2024-12-05 19:41:22.988483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.788 [2024-12-05 19:41:22.988527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:55.788 [2024-12-05 19:41:22.988541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.912 ms 00:22:55.788 [2024-12-05 19:41:22.988555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.788 [2024-12-05 19:41:22.988647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.788 [2024-12-05 19:41:22.988657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:55.788 [2024-12-05 19:41:22.988666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:55.788 [2024-12-05 19:41:22.988698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.788 [2024-12-05 19:41:22.988757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.788 [2024-12-05 19:41:22.988774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:55.788 [2024-12-05 19:41:22.988783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:55.788 [2024-12-05 19:41:22.988794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.788 [2024-12-05 19:41:22.988823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.788 [2024-12-05 19:41:22.988832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:55.788 [2024-12-05 19:41:22.988839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:55.788 [2024-12-05 19:41:22.988846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.788 [2024-12-05 19:41:22.988888] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:55.788 [2024-12-05 19:41:22.988898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.788 [2024-12-05 19:41:22.988906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:55.788 [2024-12-05 19:41:22.988914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:55.788 [2024-12-05 19:41:22.988921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.788 [2024-12-05 19:41:23.013622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.788 [2024-12-05 19:41:23.013662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:55.788 [2024-12-05 19:41:23.013687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.682 ms 00:22:55.788 [2024-12-05 19:41:23.013696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.788 [2024-12-05 19:41:23.013791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.788 [2024-12-05 19:41:23.013802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:55.788 [2024-12-05 19:41:23.013811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:55.788 [2024-12-05 19:41:23.013818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:55.788 [2024-12-05 19:41:23.014720] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:55.788 [2024-12-05 19:41:23.017721] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 283.464 ms, result 0 00:22:55.788 [2024-12-05 19:41:23.018994] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:55.788 [2024-12-05 19:41:23.032151] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:56.359  [2024-12-05T19:41:23.614Z] Copying: 4096/4096 [kB] (average 9637 kBps)[2024-12-05 19:41:23.460280] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:56.359 [2024-12-05 19:41:23.470075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.359 [2024-12-05 19:41:23.470235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:56.359 [2024-12-05 19:41:23.470261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:56.359 [2024-12-05 19:41:23.470269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.359 [2024-12-05 19:41:23.470296] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:56.359 [2024-12-05 19:41:23.472985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.359 [2024-12-05 19:41:23.473015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:56.359 [2024-12-05 19:41:23.473027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.675 ms 00:22:56.359 [2024-12-05 19:41:23.473035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.359 [2024-12-05 19:41:23.475740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.359 [2024-12-05 19:41:23.475773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:56.359 [2024-12-05 19:41:23.475782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.680 ms 00:22:56.359 [2024-12-05 19:41:23.475790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.359 [2024-12-05 19:41:23.480187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.359 [2024-12-05 19:41:23.480306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:56.359 [2024-12-05 19:41:23.480322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.377 ms 00:22:56.359 [2024-12-05 19:41:23.480330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.359 [2024-12-05 19:41:23.487270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.359 [2024-12-05 19:41:23.487387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:56.359 [2024-12-05 19:41:23.487402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.910 ms 00:22:56.359 [2024-12-05 19:41:23.487411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.359 [2024-12-05 19:41:23.511942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.359 [2024-12-05 19:41:23.511979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:56.359 [2024-12-05 19:41:23.511992] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 24.483 ms 00:22:56.359 [2024-12-05 19:41:23.512000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.359 [2024-12-05 19:41:23.527682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.359 [2024-12-05 19:41:23.527727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:56.359 [2024-12-05 19:41:23.527742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.639 ms 00:22:56.359 [2024-12-05 19:41:23.527752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.359 [2024-12-05 19:41:23.527910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.359 [2024-12-05 19:41:23.527922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:56.359 [2024-12-05 19:41:23.527938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:22:56.359 [2024-12-05 19:41:23.527946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.359 [2024-12-05 19:41:23.552093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.359 [2024-12-05 19:41:23.552124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:56.359 [2024-12-05 19:41:23.552135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.129 ms 00:22:56.359 [2024-12-05 19:41:23.552142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.359 [2024-12-05 19:41:23.604340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.359 [2024-12-05 19:41:23.604380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:56.359 [2024-12-05 19:41:23.604392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.156 ms 00:22:56.359 [2024-12-05 19:41:23.604401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.623 [2024-12-05 19:41:23.628378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.623 [2024-12-05 19:41:23.628413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:56.623 [2024-12-05 19:41:23.628426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.933 ms 00:22:56.623 [2024-12-05 19:41:23.628434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.623 [2024-12-05 19:41:23.652193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.623 [2024-12-05 19:41:23.652227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:56.623 [2024-12-05 19:41:23.652238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.687 ms 00:22:56.623 [2024-12-05 19:41:23.652247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.623 [2024-12-05 19:41:23.652288] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:56.623 [2024-12-05 19:41:23.652303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:56.623 [2024-12-05 19:41:23.652315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:56.623 [2024-12-05 19:41:23.652323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:56.623 [2024-12-05 19:41:23.652331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:22:56.623 [2024-12-05 19:41:23.652339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:56.623 [2024-12-05 19:41:23.652347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:56.623 [2024-12-05 19:41:23.652356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:56.623 [2024-12-05 19:41:23.652363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:56.623 [2024-12-05 19:41:23.652371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:56.623 [2024-12-05 19:41:23.652379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:56.623 [2024-12-05 19:41:23.652387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:56.623 [2024-12-05 19:41:23.652395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:56.623 [2024-12-05 19:41:23.652403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:56.623 [2024-12-05 19:41:23.652411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:56.623 [2024-12-05 19:41:23.652419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:56.623 [2024-12-05 19:41:23.652426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:56.623 [2024-12-05 19:41:23.652434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:56.623 [2024-12-05 19:41:23.652442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:56.623 [2024-12-05 19:41:23.652449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:56.623 [2024-12-05 19:41:23.652456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:56.623 [2024-12-05 19:41:23.652464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:56.623 [2024-12-05 19:41:23.652471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:56.623 [2024-12-05 19:41:23.652479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:56.623 [2024-12-05 19:41:23.652486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652923] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.652997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.653004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.653011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.653019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.653033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.653049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.653057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.653065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.653073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.653080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.653088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.653096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:56.624 [2024-12-05 19:41:23.653114] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:56.624 [2024-12-05 19:41:23.653122] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e1e31a5-869f-42dd-82c7-3d82fe790364 00:22:56.624 [2024-12-05 19:41:23.653131] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:56.624 [2024-12-05 19:41:23.653139] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:22:56.624 [2024-12-05 19:41:23.653146] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:56.624 [2024-12-05 19:41:23.653155] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:56.624 [2024-12-05 19:41:23.653162] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:56.624 [2024-12-05 19:41:23.653169] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:56.624 [2024-12-05 19:41:23.653180] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:56.624 [2024-12-05 19:41:23.653186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:56.624 [2024-12-05 19:41:23.653193] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:56.624 [2024-12-05 19:41:23.653200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.624 [2024-12-05 19:41:23.653207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:56.624 [2024-12-05 19:41:23.653215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:22:56.624 [2024-12-05 19:41:23.653222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.625 [2024-12-05 19:41:23.666148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.625 [2024-12-05 19:41:23.666181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:56.625 [2024-12-05 19:41:23.666193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.896 ms 00:22:56.625 [2024-12-05 19:41:23.666202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.625 [2024-12-05 19:41:23.666579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.625 [2024-12-05 19:41:23.666595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:56.625 [2024-12-05 19:41:23.666603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:22:56.625 [2024-12-05 19:41:23.666611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.625 [2024-12-05 19:41:23.702661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.625 [2024-12-05 19:41:23.702712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:56.625 [2024-12-05 19:41:23.702723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.625 [2024-12-05 19:41:23.702737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.625 [2024-12-05 19:41:23.702841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.625 [2024-12-05 19:41:23.702850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:56.625 [2024-12-05 19:41:23.702858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.625 [2024-12-05 19:41:23.702865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.625 [2024-12-05 19:41:23.702916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.625 [2024-12-05 19:41:23.702926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:56.625 [2024-12-05 19:41:23.702934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.625 [2024-12-05 19:41:23.702942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.625 [2024-12-05 19:41:23.702963] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.625 [2024-12-05 19:41:23.702971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:56.625 [2024-12-05 19:41:23.702979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.625 [2024-12-05 19:41:23.702986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.625 [2024-12-05 19:41:23.784152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.625 [2024-12-05 19:41:23.784202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:56.625 [2024-12-05 19:41:23.784214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.625 [2024-12-05 19:41:23.784228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.625 [2024-12-05 19:41:23.850882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.625 [2024-12-05 19:41:23.850930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:56.625 [2024-12-05 19:41:23.850942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.625 [2024-12-05 19:41:23.850950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.625 [2024-12-05 19:41:23.851031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.625 [2024-12-05 19:41:23.851041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:56.625 [2024-12-05 19:41:23.851049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.625 [2024-12-05 19:41:23.851058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.625 [2024-12-05 19:41:23.851087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.625 [2024-12-05 19:41:23.851101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:56.625 [2024-12-05 19:41:23.851109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.625 [2024-12-05 19:41:23.851118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.625 [2024-12-05 19:41:23.851208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.625 [2024-12-05 19:41:23.851218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:56.625 [2024-12-05 19:41:23.851226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.625 [2024-12-05 19:41:23.851234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.625 [2024-12-05 19:41:23.851265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.625 [2024-12-05 19:41:23.851275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:56.625 [2024-12-05 19:41:23.851286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.625 [2024-12-05 19:41:23.851294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.625 [2024-12-05 19:41:23.851330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.625 [2024-12-05 19:41:23.851340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:56.625 [2024-12-05 19:41:23.851347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.625 [2024-12-05 19:41:23.851355] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:56.625 [2024-12-05 19:41:23.851399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.625 [2024-12-05 19:41:23.851412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:56.625 [2024-12-05 19:41:23.851420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.625 [2024-12-05 19:41:23.851427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.625 [2024-12-05 19:41:23.851569] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 381.510 ms, result 0 00:22:57.584 00:22:57.584 00:22:57.584 19:41:24 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:22:57.584 19:41:24 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=77450 00:22:57.584 19:41:24 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 77450 00:22:57.584 19:41:24 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 77450 ']' 00:22:57.584 19:41:24 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.584 19:41:24 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:57.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.584 19:41:24 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.584 19:41:24 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:57.584 19:41:24 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:57.584 [2024-12-05 19:41:24.811187] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:22:57.584 [2024-12-05 19:41:24.811350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77450 ] 00:22:57.846 [2024-12-05 19:41:24.979052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.106 [2024-12-05 19:41:25.117596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.677 19:41:25 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:58.677 19:41:25 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:22:58.677 19:41:25 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:58.939 [2024-12-05 19:41:26.057759] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:58.939 [2024-12-05 19:41:26.057855] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:59.202 [2024-12-05 19:41:26.239634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.202 [2024-12-05 19:41:26.239712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:59.202 [2024-12-05 19:41:26.239729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:59.202 [2024-12-05 19:41:26.239739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.202 [2024-12-05 19:41:26.242762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.202 [2024-12-05 19:41:26.242813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:59.202 [2024-12-05 19:41:26.242827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.000 ms 00:22:59.202 [2024-12-05 19:41:26.242835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.202 [2024-12-05 19:41:26.242954] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:59.202 [2024-12-05 19:41:26.243723] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:59.202 [2024-12-05 19:41:26.243756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.202 [2024-12-05 19:41:26.243764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:59.202 [2024-12-05 19:41:26.243776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:22:59.202 [2024-12-05 19:41:26.243783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.202 [2024-12-05 19:41:26.245609] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:59.202 [2024-12-05 19:41:26.260168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.202 [2024-12-05 19:41:26.260230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:59.202 [2024-12-05 19:41:26.260245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.564 ms 00:22:59.202 [2024-12-05 19:41:26.260257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.202 [2024-12-05 19:41:26.260374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.202 [2024-12-05 19:41:26.260388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:59.202 [2024-12-05 19:41:26.260398] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:59.202 [2024-12-05 19:41:26.260408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.202 [2024-12-05 19:41:26.269066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.202 [2024-12-05 19:41:26.269117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:59.202 [2024-12-05 19:41:26.269128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.602 ms 00:22:59.202 [2024-12-05 19:41:26.269138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.202 [2024-12-05 19:41:26.269261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.202 [2024-12-05 19:41:26.269274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:59.202 [2024-12-05 19:41:26.269284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:22:59.202 [2024-12-05 19:41:26.269297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.202 [2024-12-05 19:41:26.269325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.202 [2024-12-05 19:41:26.269336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:59.202 [2024-12-05 19:41:26.269345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:59.202 [2024-12-05 19:41:26.269355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.202 [2024-12-05 19:41:26.269381] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:59.202 [2024-12-05 19:41:26.273374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.202 [2024-12-05 19:41:26.273414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:59.202 [2024-12-05 19:41:26.273427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.997 ms 00:22:59.202 [2024-12-05 19:41:26.273436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.202 [2024-12-05 19:41:26.273521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.202 [2024-12-05 19:41:26.273532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:59.202 [2024-12-05 19:41:26.273543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:59.202 [2024-12-05 19:41:26.273554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.202 [2024-12-05 19:41:26.273579] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:59.202 [2024-12-05 19:41:26.273603] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:59.203 [2024-12-05 19:41:26.273650] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:59.203 [2024-12-05 19:41:26.273682] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:59.203 [2024-12-05 19:41:26.273793] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:59.203 [2024-12-05 19:41:26.273805] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:59.203 [2024-12-05 19:41:26.273820] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:59.203 [2024-12-05 19:41:26.273831] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:59.203 [2024-12-05 19:41:26.273842] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:59.203 [2024-12-05 19:41:26.273851] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:59.203 [2024-12-05 19:41:26.273861] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:59.203 [2024-12-05 19:41:26.273869] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:59.203 [2024-12-05 19:41:26.273882] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:59.203 [2024-12-05 19:41:26.273891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.203 [2024-12-05 19:41:26.273900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:59.203 [2024-12-05 19:41:26.273907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:22:59.203 [2024-12-05 19:41:26.273917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.203 [2024-12-05 19:41:26.274006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.203 [2024-12-05 19:41:26.274016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:59.203 [2024-12-05 19:41:26.274024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:59.203 [2024-12-05 19:41:26.274034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.203 [2024-12-05 19:41:26.274135] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:59.203 [2024-12-05 19:41:26.274157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:59.203 [2024-12-05 19:41:26.274166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:59.203 [2024-12-05 19:41:26.274177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.203 [2024-12-05 19:41:26.274185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:59.203 [2024-12-05 19:41:26.274197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:59.203 [2024-12-05 19:41:26.274205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:59.203 [2024-12-05 19:41:26.274216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:59.203 [2024-12-05 19:41:26.274223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:59.203 [2024-12-05 19:41:26.274232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:59.203 [2024-12-05 19:41:26.274238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:59.203 [2024-12-05 19:41:26.274247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:59.203 [2024-12-05 19:41:26.274255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:59.203 [2024-12-05 19:41:26.274266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:59.203 [2024-12-05 19:41:26.274273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:59.203 [2024-12-05 19:41:26.274282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.203 
[2024-12-05 19:41:26.274289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:59.203 [2024-12-05 19:41:26.274298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:59.203 [2024-12-05 19:41:26.274312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.203 [2024-12-05 19:41:26.274321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:59.203 [2024-12-05 19:41:26.274328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:59.203 [2024-12-05 19:41:26.274336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.203 [2024-12-05 19:41:26.274343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:59.203 [2024-12-05 19:41:26.274353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:59.203 [2024-12-05 19:41:26.274360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.203 [2024-12-05 19:41:26.274369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:59.203 [2024-12-05 19:41:26.274375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:59.203 [2024-12-05 19:41:26.274383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.203 [2024-12-05 19:41:26.274390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:59.203 [2024-12-05 19:41:26.274400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:59.203 [2024-12-05 19:41:26.274407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.203 [2024-12-05 19:41:26.274416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:59.203 [2024-12-05 19:41:26.274423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:59.203 [2024-12-05 19:41:26.274432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:59.203 [2024-12-05 19:41:26.274439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:59.203 [2024-12-05 19:41:26.274448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:59.203 [2024-12-05 19:41:26.274455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:59.203 [2024-12-05 19:41:26.274463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:59.203 [2024-12-05 19:41:26.274470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:59.203 [2024-12-05 19:41:26.274481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.203 [2024-12-05 19:41:26.274488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:59.203 [2024-12-05 19:41:26.274497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:59.203 [2024-12-05 19:41:26.274504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.203 [2024-12-05 19:41:26.274512] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:59.203 [2024-12-05 19:41:26.274522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:59.203 [2024-12-05 19:41:26.274543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:59.203 [2024-12-05 19:41:26.274551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.203 [2024-12-05 19:41:26.274561] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:22:59.203 [2024-12-05 19:41:26.274568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:59.203 [2024-12-05 19:41:26.274576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:59.203 [2024-12-05 19:41:26.274583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:59.203 [2024-12-05 19:41:26.274592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:59.203 [2024-12-05 19:41:26.274599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:59.203 [2024-12-05 19:41:26.274610] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:59.203 [2024-12-05 19:41:26.274619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:59.203 [2024-12-05 19:41:26.274634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:59.203 [2024-12-05 19:41:26.274642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:59.203 [2024-12-05 19:41:26.274652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:59.203 [2024-12-05 19:41:26.274661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:59.203 [2024-12-05 19:41:26.274695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:59.203 [2024-12-05 19:41:26.274703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:59.203 [2024-12-05 19:41:26.274712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:59.203 [2024-12-05 19:41:26.274721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:59.203 [2024-12-05 19:41:26.274730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:59.203 [2024-12-05 19:41:26.274738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:59.203 [2024-12-05 19:41:26.274748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:59.203 [2024-12-05 19:41:26.274755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:59.203 [2024-12-05 19:41:26.274766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:59.203 [2024-12-05 19:41:26.274773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:59.203 [2024-12-05 19:41:26.274783] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:59.203 [2024-12-05 
19:41:26.274792] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:59.203 [2024-12-05 19:41:26.274805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:59.203 [2024-12-05 19:41:26.274813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:59.203 [2024-12-05 19:41:26.274823] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:59.203 [2024-12-05 19:41:26.274831] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:59.203 [2024-12-05 19:41:26.274842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.203 [2024-12-05 19:41:26.274850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:59.203 [2024-12-05 19:41:26.274860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:22:59.203 [2024-12-05 19:41:26.274870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.203 [2024-12-05 19:41:26.307572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.204 [2024-12-05 19:41:26.307625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:59.204 [2024-12-05 19:41:26.307640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.632 ms 00:22:59.204 [2024-12-05 19:41:26.307652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.204 [2024-12-05 19:41:26.307821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.204 [2024-12-05 19:41:26.307834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:59.204 [2024-12-05 19:41:26.307845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:59.204 [2024-12-05 19:41:26.307853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.204 [2024-12-05 19:41:26.342766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.204 [2024-12-05 19:41:26.342818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:59.204 [2024-12-05 19:41:26.342832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.886 ms 00:22:59.204 [2024-12-05 19:41:26.342840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.204 [2024-12-05 19:41:26.342936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.204 [2024-12-05 19:41:26.342946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:59.204 [2024-12-05 19:41:26.342957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:59.204 [2024-12-05 19:41:26.342967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.204 [2024-12-05 19:41:26.343505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.204 [2024-12-05 19:41:26.343542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:59.204 [2024-12-05 19:41:26.343555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:22:59.204 [2024-12-05 19:41:26.343563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:59.204 [2024-12-05 19:41:26.343737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.204 [2024-12-05 19:41:26.343748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:59.204 [2024-12-05 19:41:26.343760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:22:59.204 [2024-12-05 19:41:26.343768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.204 [2024-12-05 19:41:26.361829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.204 [2024-12-05 19:41:26.361873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:59.204 [2024-12-05 19:41:26.361887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.034 ms 00:22:59.204 [2024-12-05 19:41:26.361896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.204 [2024-12-05 19:41:26.385634] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:59.204 [2024-12-05 19:41:26.385703] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:59.204 [2024-12-05 19:41:26.385724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.204 [2024-12-05 19:41:26.385737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:59.204 [2024-12-05 19:41:26.385752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.705 ms 00:22:59.204 [2024-12-05 19:41:26.385770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.204 [2024-12-05 19:41:26.412307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.204 [2024-12-05 19:41:26.412362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:59.204 [2024-12-05 19:41:26.412378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.419 ms 00:22:59.204 [2024-12-05 19:41:26.412386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.204 [2024-12-05 19:41:26.425556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.204 [2024-12-05 19:41:26.425616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:59.204 [2024-12-05 19:41:26.425633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.069 ms 00:22:59.204 [2024-12-05 19:41:26.425641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.204 [2024-12-05 19:41:26.438231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.204 [2024-12-05 19:41:26.438274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:59.204 [2024-12-05 19:41:26.438289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.484 ms 00:22:59.204 [2024-12-05 19:41:26.438296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.204 [2024-12-05 19:41:26.438991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.204 [2024-12-05 19:41:26.439023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:59.204 [2024-12-05 19:41:26.439036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:22:59.204 [2024-12-05 19:41:26.439044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.465 [2024-12-05 
19:41:26.505316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.465 [2024-12-05 19:41:26.505392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:59.465 [2024-12-05 19:41:26.505411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.237 ms 00:22:59.465 [2024-12-05 19:41:26.505420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.465 [2024-12-05 19:41:26.517275] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:59.465 [2024-12-05 19:41:26.537316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.465 [2024-12-05 19:41:26.537383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:59.465 [2024-12-05 19:41:26.537403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.772 ms 00:22:59.465 [2024-12-05 19:41:26.537414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.465 [2024-12-05 19:41:26.537528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.465 [2024-12-05 19:41:26.537541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:59.465 [2024-12-05 19:41:26.537550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:59.465 [2024-12-05 19:41:26.537560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.465 [2024-12-05 19:41:26.537620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.465 [2024-12-05 19:41:26.537633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:59.465 [2024-12-05 19:41:26.537642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:59.465 [2024-12-05 19:41:26.537654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.465 [2024-12-05 19:41:26.537707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.465 [2024-12-05 19:41:26.537719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:59.465 [2024-12-05 19:41:26.537728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:59.465 [2024-12-05 19:41:26.537741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.465 [2024-12-05 19:41:26.537779] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:59.465 [2024-12-05 19:41:26.537792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.465 [2024-12-05 19:41:26.537804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:59.465 [2024-12-05 19:41:26.537814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:59.465 [2024-12-05 19:41:26.537821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.465 [2024-12-05 19:41:26.592657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.465 [2024-12-05 19:41:26.592727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:59.465 [2024-12-05 19:41:26.592745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.800 ms 00:22:59.465 [2024-12-05 19:41:26.592754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.465 [2024-12-05 19:41:26.592912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.465 [2024-12-05 19:41:26.592924] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:59.465 [2024-12-05 19:41:26.592936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:59.465 [2024-12-05 19:41:26.592947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.465 [2024-12-05 19:41:26.594026] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:59.465 [2024-12-05 19:41:26.597709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 354.045 ms, result 0 00:22:59.465 [2024-12-05 19:41:26.600073] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:59.465 Some configs were skipped because the RPC state that can call them passed over. 00:22:59.465 19:41:26 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:22:59.727 [2024-12-05 19:41:26.849008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.727 [2024-12-05 19:41:26.849085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:59.727 [2024-12-05 19:41:26.849100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.134 ms 00:22:59.727 [2024-12-05 19:41:26.849112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.727 [2024-12-05 19:41:26.849151] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.288 ms, result 0 00:22:59.727 true 00:22:59.727 19:41:26 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:22:59.988 [2024-12-05 19:41:27.086375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.988 [2024-12-05 19:41:27.086452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:59.988 [2024-12-05 19:41:27.086471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.070 ms 00:22:59.988 [2024-12-05 19:41:27.086479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.988 [2024-12-05 19:41:27.086524] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.235 ms, result 0 00:22:59.988 true 00:22:59.988 19:41:27 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 77450 00:22:59.988 19:41:27 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77450 ']' 00:22:59.988 19:41:27 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77450 00:22:59.988 19:41:27 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:22:59.988 19:41:27 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:59.988 19:41:27 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77450 00:22:59.988 killing process with pid 77450 00:22:59.988 19:41:27 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:59.988 19:41:27 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:59.988 19:41:27 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77450' 00:22:59.988 19:41:27 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 77450 00:22:59.988 19:41:27 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 77450 00:23:00.941 [2024-12-05 19:41:27.919098] 
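The two bdev_ftl_unmap RPCs above each trim a 1024-block range, one at LBA 0 and one at LBA 23591936, which is 23592960 - 1024, i.e. the last 1024 entries of the 23592960-entry L2P reported in the layout dumps; each call prints true once the internal 'FTL trim' management process finishes with result 0. A minimal sketch of the same two calls, assuming the repo paths used by this run:

# Trim 1024 blocks at the first and last addressable ranges of the FTL bdev
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024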
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.941 [2024-12-05 19:41:27.919176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:00.941 [2024-12-05 19:41:27.919191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:00.941 [2024-12-05 19:41:27.919201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.941 [2024-12-05 19:41:27.919227] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:00.941 [2024-12-05 19:41:27.922271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.942 [2024-12-05 19:41:27.922315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:00.942 [2024-12-05 19:41:27.922332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.022 ms 00:23:00.942 [2024-12-05 19:41:27.922341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.942 [2024-12-05 19:41:27.922651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.942 [2024-12-05 19:41:27.922663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:00.942 [2024-12-05 19:41:27.922688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:23:00.942 [2024-12-05 19:41:27.922696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.942 [2024-12-05 19:41:27.927544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.942 [2024-12-05 19:41:27.927591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:00.942 [2024-12-05 19:41:27.927609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.823 ms 00:23:00.942 [2024-12-05 19:41:27.927618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.942 [2024-12-05 19:41:27.934641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.942 [2024-12-05 19:41:27.934696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:00.942 [2024-12-05 19:41:27.934715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.972 ms 00:23:00.942 [2024-12-05 19:41:27.934724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.942 [2024-12-05 19:41:27.946341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.942 [2024-12-05 19:41:27.946401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:00.942 [2024-12-05 19:41:27.946419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.548 ms 00:23:00.942 [2024-12-05 19:41:27.946427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.942 [2024-12-05 19:41:27.954853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.942 [2024-12-05 19:41:27.954914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:00.942 [2024-12-05 19:41:27.954928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.369 ms 00:23:00.942 [2024-12-05 19:41:27.954938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.942 [2024-12-05 19:41:27.955100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.942 [2024-12-05 19:41:27.955112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:00.942 [2024-12-05 19:41:27.955124] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:00.942 [2024-12-05 19:41:27.955132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.942 [2024-12-05 19:41:27.966548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.942 [2024-12-05 19:41:27.966596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:00.942 [2024-12-05 19:41:27.966610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.391 ms 00:23:00.942 [2024-12-05 19:41:27.966618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.942 [2024-12-05 19:41:27.977808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.942 [2024-12-05 19:41:27.977859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:00.942 [2024-12-05 19:41:27.977880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.135 ms 00:23:00.942 [2024-12-05 19:41:27.977888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.942 [2024-12-05 19:41:27.988455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.942 [2024-12-05 19:41:27.988502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:00.942 [2024-12-05 19:41:27.988516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.512 ms 00:23:00.942 [2024-12-05 19:41:27.988523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.942 [2024-12-05 19:41:27.998905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.942 [2024-12-05 19:41:27.998950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:00.942 [2024-12-05 19:41:27.998963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.298 ms 00:23:00.942 [2024-12-05 19:41:27.998971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.942 [2024-12-05 19:41:27.999018] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:00.942 [2024-12-05 19:41:27.999034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 
19:41:27.999130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:23:00.942 [2024-12-05 19:41:27.999354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:00.942 [2024-12-05 19:41:27.999442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:00.943 [2024-12-05 19:41:27.999957] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:00.943 [2024-12-05 19:41:27.999973] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e1e31a5-869f-42dd-82c7-3d82fe790364 00:23:00.943 [2024-12-05 19:41:27.999985] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:00.943 [2024-12-05 19:41:27.999995] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:00.943 [2024-12-05 19:41:28.000003] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:00.943 [2024-12-05 19:41:28.000014] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:00.943 [2024-12-05 19:41:28.000022] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:00.943 [2024-12-05 19:41:28.000032] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:00.943 [2024-12-05 19:41:28.000040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:00.943 [2024-12-05 19:41:28.000049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:00.943 [2024-12-05 19:41:28.000056] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:00.943 [2024-12-05 19:41:28.000066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
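Each "Band N: 0 / 261120" line above appears to report a band's valid blocks out of its 261120-block capacity, with wr_cnt as its cumulative write count; every band is still free because no user data has been written. That also explains the statistics dump that follows the band list: with total writes 960 and user writes 0, the write amplification factor is

    WAF = total media writes / user writes = 960 / 0  ->  reported as inf

so the 960 writes are purely internal metadata traffic (startup initialization plus the persisted trim and shutdown state), not amplification of user I/O.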
00:23:00.943 [2024-12-05 19:41:28.000074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:00.943 [2024-12-05 19:41:28.000085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.050 ms 00:23:00.943 [2024-12-05 19:41:28.000092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.943 [2024-12-05 19:41:28.014145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.943 [2024-12-05 19:41:28.014191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:00.943 [2024-12-05 19:41:28.014208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.007 ms 00:23:00.943 [2024-12-05 19:41:28.014218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.943 [2024-12-05 19:41:28.014643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.943 [2024-12-05 19:41:28.014663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:00.943 [2024-12-05 19:41:28.014700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:23:00.943 [2024-12-05 19:41:28.014709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.943 [2024-12-05 19:41:28.063733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.943 [2024-12-05 19:41:28.063793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:00.943 [2024-12-05 19:41:28.063807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.943 [2024-12-05 19:41:28.063816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.943 [2024-12-05 19:41:28.063932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.943 [2024-12-05 19:41:28.063942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:00.943 [2024-12-05 19:41:28.063956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.943 [2024-12-05 19:41:28.063964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.943 [2024-12-05 19:41:28.064021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.943 [2024-12-05 19:41:28.064031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:00.943 [2024-12-05 19:41:28.064044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.943 [2024-12-05 19:41:28.064052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.943 [2024-12-05 19:41:28.064072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.943 [2024-12-05 19:41:28.064080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:00.943 [2024-12-05 19:41:28.064090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.943 [2024-12-05 19:41:28.064100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.943 [2024-12-05 19:41:28.149866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.943 [2024-12-05 19:41:28.149945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:00.943 [2024-12-05 19:41:28.149963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.943 [2024-12-05 19:41:28.149972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.205 [2024-12-05 
19:41:28.221119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:01.205 [2024-12-05 19:41:28.221179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:01.205 [2024-12-05 19:41:28.221195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:01.205 [2024-12-05 19:41:28.221207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.205 [2024-12-05 19:41:28.221303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:01.205 [2024-12-05 19:41:28.221313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:01.205 [2024-12-05 19:41:28.221327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:01.205 [2024-12-05 19:41:28.221335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.205 [2024-12-05 19:41:28.221371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:01.205 [2024-12-05 19:41:28.221380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:01.205 [2024-12-05 19:41:28.221390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:01.205 [2024-12-05 19:41:28.221398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.205 [2024-12-05 19:41:28.221504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:01.205 [2024-12-05 19:41:28.221514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:01.205 [2024-12-05 19:41:28.221525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:01.205 [2024-12-05 19:41:28.221533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.205 [2024-12-05 19:41:28.221570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:01.205 [2024-12-05 19:41:28.221580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:01.205 [2024-12-05 19:41:28.221590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:01.205 [2024-12-05 19:41:28.221598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.205 [2024-12-05 19:41:28.221645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:01.205 [2024-12-05 19:41:28.221655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:01.205 [2024-12-05 19:41:28.221684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:01.205 [2024-12-05 19:41:28.221694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.205 [2024-12-05 19:41:28.221745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:01.205 [2024-12-05 19:41:28.221756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:01.205 [2024-12-05 19:41:28.221767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:01.205 [2024-12-05 19:41:28.221775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.205 [2024-12-05 19:41:28.221931] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 302.804 ms, result 0 00:23:01.786 19:41:28 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:02.048 [2024-12-05 19:41:29.056242] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:23:02.048 [2024-12-05 19:41:29.056404] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77504 ] 00:23:02.048 [2024-12-05 19:41:29.219830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.308 [2024-12-05 19:41:29.359101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.569 [2024-12-05 19:41:29.744256] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:02.569 [2024-12-05 19:41:29.744358] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:02.833 [2024-12-05 19:41:29.910139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.833 [2024-12-05 19:41:29.910224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:02.833 [2024-12-05 19:41:29.910239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:02.833 [2024-12-05 19:41:29.910249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.833 [2024-12-05 19:41:29.913482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.833 [2024-12-05 19:41:29.913538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:02.833 [2024-12-05 19:41:29.913550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.211 ms 00:23:02.833 [2024-12-05 19:41:29.913559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.833 [2024-12-05 19:41:29.913710] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:02.833 [2024-12-05 19:41:29.914472] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:02.833 [2024-12-05 19:41:29.914505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.833 [2024-12-05 19:41:29.914514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:02.833 [2024-12-05 19:41:29.914525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:23:02.833 [2024-12-05 19:41:29.914533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.833 [2024-12-05 19:41:29.916318] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:02.833 [2024-12-05 19:41:29.930586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.833 [2024-12-05 19:41:29.930643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:02.833 [2024-12-05 19:41:29.930658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.270 ms 00:23:02.833 [2024-12-05 19:41:29.930667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.833 [2024-12-05 19:41:29.930805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.833 [2024-12-05 19:41:29.930819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:02.833 [2024-12-05 19:41:29.930829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:02.833 [2024-12-05 
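The spdk_dd invocation above copies --count=65536 I/O units (256 MiB, assuming the FTL's 4 KiB block size) from the ftl0 bdev (--ib) into the flat file named by --of, using the JSON config in ftl.json to rebuild the bdev stack inside the standalone dd app, which is why a full FTL startup sequence follows. A sketch of the mirror-image step under the same assumptions, using spdk_dd's --if (input file) and --ob (output bdev) options:

# Write the previously captured 65536 blocks back into the FTL bdev
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/data --ob=ftl0 --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json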
19:41:29.930837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.833 [2024-12-05 19:41:29.939385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.833 [2024-12-05 19:41:29.939431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:02.833 [2024-12-05 19:41:29.939442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.498 ms 00:23:02.833 [2024-12-05 19:41:29.939451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.833 [2024-12-05 19:41:29.939564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.833 [2024-12-05 19:41:29.939576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:02.833 [2024-12-05 19:41:29.939585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:02.833 [2024-12-05 19:41:29.939594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.834 [2024-12-05 19:41:29.939628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.834 [2024-12-05 19:41:29.939639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:02.834 [2024-12-05 19:41:29.939647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:02.834 [2024-12-05 19:41:29.939655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.834 [2024-12-05 19:41:29.939697] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:02.834 [2024-12-05 19:41:29.943853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.834 [2024-12-05 19:41:29.943900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:02.834 [2024-12-05 19:41:29.943912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.162 ms 00:23:02.834 [2024-12-05 19:41:29.943920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.834 [2024-12-05 19:41:29.944002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.834 [2024-12-05 19:41:29.944014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:02.834 [2024-12-05 19:41:29.944024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:02.834 [2024-12-05 19:41:29.944032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.834 [2024-12-05 19:41:29.944059] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:02.834 [2024-12-05 19:41:29.944083] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:02.834 [2024-12-05 19:41:29.944120] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:02.834 [2024-12-05 19:41:29.944137] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:02.834 [2024-12-05 19:41:29.944245] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:02.834 [2024-12-05 19:41:29.944255] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:02.834 [2024-12-05 19:41:29.944267] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
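This second startup is driven by the FTL bdev being recreated from ftl.json, presumably via a saved bdev_ftl_create entry: the superblock is loaded rather than freshly initialized ("SHM: clean 0, shm_clean 0") and the layout blobs above are read back and re-stored. A minimal sketch of the equivalent RPC, reusing the device UUID from the statistics dump; the flag names follow current rpc.py conventions, and the base bdev name is an assumption (the log only names nvc0n1p0 as the write-buffer cache):

# Load the existing FTL instance over its base bdev and NV-cache partition
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_create -b ftl0 -d nvme0n1 -c nvc0n1p0 --uuid 2e1e31a5-869f-42dd-82c7-3d82fe790364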
00:23:02.834 [2024-12-05 19:41:29.944280] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:02.834 [2024-12-05 19:41:29.944290] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:02.834 [2024-12-05 19:41:29.944299] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:02.834 [2024-12-05 19:41:29.944307] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:02.834 [2024-12-05 19:41:29.944316] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:02.834 [2024-12-05 19:41:29.944324] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:02.834 [2024-12-05 19:41:29.944334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.834 [2024-12-05 19:41:29.944341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:02.834 [2024-12-05 19:41:29.944349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:23:02.834 [2024-12-05 19:41:29.944357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.834 [2024-12-05 19:41:29.944446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.834 [2024-12-05 19:41:29.944482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:02.834 [2024-12-05 19:41:29.944491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:02.834 [2024-12-05 19:41:29.944499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.834 [2024-12-05 19:41:29.944607] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:02.834 [2024-12-05 19:41:29.944627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:02.834 [2024-12-05 19:41:29.944636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:02.834 [2024-12-05 19:41:29.944645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.834 [2024-12-05 19:41:29.944653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:02.834 [2024-12-05 19:41:29.944661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:02.834 [2024-12-05 19:41:29.944684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:02.834 [2024-12-05 19:41:29.944693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:02.834 [2024-12-05 19:41:29.944700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:02.834 [2024-12-05 19:41:29.944708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:02.834 [2024-12-05 19:41:29.944717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:02.834 [2024-12-05 19:41:29.944732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:02.834 [2024-12-05 19:41:29.944740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:02.834 [2024-12-05 19:41:29.944748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:02.834 [2024-12-05 19:41:29.944774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:02.834 [2024-12-05 19:41:29.944782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.834 [2024-12-05 19:41:29.944789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:23:02.834 [2024-12-05 19:41:29.944797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:02.834 [2024-12-05 19:41:29.944804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.834 [2024-12-05 19:41:29.944811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:02.834 [2024-12-05 19:41:29.944819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:02.834 [2024-12-05 19:41:29.944826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.834 [2024-12-05 19:41:29.944833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:02.834 [2024-12-05 19:41:29.944841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:02.834 [2024-12-05 19:41:29.944849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.834 [2024-12-05 19:41:29.944856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:02.834 [2024-12-05 19:41:29.944863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:02.834 [2024-12-05 19:41:29.944870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.834 [2024-12-05 19:41:29.944877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:02.834 [2024-12-05 19:41:29.944884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:02.834 [2024-12-05 19:41:29.944891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.834 [2024-12-05 19:41:29.944898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:02.834 [2024-12-05 19:41:29.944905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:02.834 [2024-12-05 19:41:29.944912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:02.834 [2024-12-05 19:41:29.944918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:02.834 [2024-12-05 19:41:29.944925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:02.834 [2024-12-05 19:41:29.944931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:02.834 [2024-12-05 19:41:29.944938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:02.834 [2024-12-05 19:41:29.944945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:02.834 [2024-12-05 19:41:29.944952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.834 [2024-12-05 19:41:29.944960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:02.834 [2024-12-05 19:41:29.944966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:02.834 [2024-12-05 19:41:29.944973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.834 [2024-12-05 19:41:29.944979] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:02.834 [2024-12-05 19:41:29.944987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:02.834 [2024-12-05 19:41:29.945001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:02.834 [2024-12-05 19:41:29.945010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.834 [2024-12-05 19:41:29.945019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:02.834 [2024-12-05 19:41:29.945026] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:02.834 [2024-12-05 19:41:29.945033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:02.835 [2024-12-05 19:41:29.945040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:02.835 [2024-12-05 19:41:29.945046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:02.835 [2024-12-05 19:41:29.945053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:02.835 [2024-12-05 19:41:29.945061] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:02.835 [2024-12-05 19:41:29.945071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.835 [2024-12-05 19:41:29.945079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:02.835 [2024-12-05 19:41:29.945087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:02.835 [2024-12-05 19:41:29.945095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:02.835 [2024-12-05 19:41:29.945102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:02.835 [2024-12-05 19:41:29.945109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:02.835 [2024-12-05 19:41:29.945116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:02.835 [2024-12-05 19:41:29.945123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:02.835 [2024-12-05 19:41:29.945130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:02.835 [2024-12-05 19:41:29.945137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:02.835 [2024-12-05 19:41:29.945144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:02.835 [2024-12-05 19:41:29.945151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:02.835 [2024-12-05 19:41:29.945159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:02.835 [2024-12-05 19:41:29.945166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:02.835 [2024-12-05 19:41:29.945174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:02.835 [2024-12-05 19:41:29.945181] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:02.835 [2024-12-05 19:41:29.945188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.835 [2024-12-05 19:41:29.945197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:02.835 [2024-12-05 19:41:29.945205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:02.835 [2024-12-05 19:41:29.945214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:02.835 [2024-12-05 19:41:29.945221] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:02.835 [2024-12-05 19:41:29.945229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.835 [2024-12-05 19:41:29.945241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:02.835 [2024-12-05 19:41:29.945252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:23:02.835 [2024-12-05 19:41:29.945279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.835 [2024-12-05 19:41:29.978766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.835 [2024-12-05 19:41:29.978827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:02.835 [2024-12-05 19:41:29.978840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.421 ms 00:23:02.835 [2024-12-05 19:41:29.978851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.835 [2024-12-05 19:41:29.979014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.835 [2024-12-05 19:41:29.979026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:02.835 [2024-12-05 19:41:29.979036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:02.835 [2024-12-05 19:41:29.979044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.835 [2024-12-05 19:41:30.031358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.835 [2024-12-05 19:41:30.031426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:02.835 [2024-12-05 19:41:30.031444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.289 ms 00:23:02.835 [2024-12-05 19:41:30.031452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.835 [2024-12-05 19:41:30.031592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.835 [2024-12-05 19:41:30.031605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:02.835 [2024-12-05 19:41:30.031616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:02.835 [2024-12-05 19:41:30.031624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.835 [2024-12-05 19:41:30.032226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.835 [2024-12-05 19:41:30.032268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:02.835 [2024-12-05 19:41:30.032289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:23:02.835 [2024-12-05 19:41:30.032298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.835 [2024-12-05 19:41:30.032463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
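The layout numbers in this dump cross-check cleanly: the l2p region size follows from the entry count times the 4-byte address size, and the entry count times the (assumed) 4 KiB block size gives the user-visible capacity, with the remainder of the 102400 MiB data_btm region held back as overprovisioning. A quick shell-arithmetic check:

# 23592960 L2P entries * 4-byte addresses = 90 MiB, matching "Region l2p ... blocks: 90.00 MiB"
echo $(( 23592960 * 4 / 1048576 ))      # -> 90
# 23592960 blocks * 4096 bytes = 92160 MiB of user space; the 102400 MiB
# data region therefore keeps 10240 MiB (10%) back as overprovisioning
echo $(( 23592960 * 4096 / 1048576 ))   # -> 92160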
[FTL][ftl0] Action 00:23:02.835 [2024-12-05 19:41:30.032474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:02.835 [2024-12-05 19:41:30.032483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:23:02.835 [2024-12-05 19:41:30.032491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.835 [2024-12-05 19:41:30.049007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.835 [2024-12-05 19:41:30.049054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:02.835 [2024-12-05 19:41:30.049065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.492 ms 00:23:02.835 [2024-12-05 19:41:30.049074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.835 [2024-12-05 19:41:30.063755] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:02.835 [2024-12-05 19:41:30.063809] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:02.835 [2024-12-05 19:41:30.063823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.835 [2024-12-05 19:41:30.063833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:02.835 [2024-12-05 19:41:30.063843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.623 ms 00:23:02.835 [2024-12-05 19:41:30.063851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-12-05 19:41:30.089585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.097 [2024-12-05 19:41:30.089642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:03.097 [2024-12-05 19:41:30.089657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.632 ms 00:23:03.097 [2024-12-05 19:41:30.089667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-12-05 19:41:30.102619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.097 [2024-12-05 19:41:30.102684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:03.097 [2024-12-05 19:41:30.102697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.841 ms 00:23:03.097 [2024-12-05 19:41:30.102705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-12-05 19:41:30.115896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.097 [2024-12-05 19:41:30.115944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:03.097 [2024-12-05 19:41:30.115957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.100 ms 00:23:03.097 [2024-12-05 19:41:30.115965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-12-05 19:41:30.116643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.097 [2024-12-05 19:41:30.116691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:03.097 [2024-12-05 19:41:30.116703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:23:03.097 [2024-12-05 19:41:30.116711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-12-05 19:41:30.183131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.097 [2024-12-05 
19:41:30.183210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:03.097 [2024-12-05 19:41:30.183228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.390 ms 00:23:03.097 [2024-12-05 19:41:30.183237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-12-05 19:41:30.195205] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:03.097 [2024-12-05 19:41:30.215751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.097 [2024-12-05 19:41:30.215803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:03.097 [2024-12-05 19:41:30.215818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.382 ms 00:23:03.097 [2024-12-05 19:41:30.215833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-12-05 19:41:30.215952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.097 [2024-12-05 19:41:30.215964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:03.097 [2024-12-05 19:41:30.215975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:03.097 [2024-12-05 19:41:30.215984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-12-05 19:41:30.216044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.097 [2024-12-05 19:41:30.216054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:03.097 [2024-12-05 19:41:30.216063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:03.097 [2024-12-05 19:41:30.216076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-12-05 19:41:30.216109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.097 [2024-12-05 19:41:30.216119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:03.097 [2024-12-05 19:41:30.216127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:03.097 [2024-12-05 19:41:30.216135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-12-05 19:41:30.216176] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:03.097 [2024-12-05 19:41:30.216186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.097 [2024-12-05 19:41:30.216195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:03.097 [2024-12-05 19:41:30.216203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:03.097 [2024-12-05 19:41:30.216213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-12-05 19:41:30.243652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.097 [2024-12-05 19:41:30.243723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:03.097 [2024-12-05 19:41:30.243740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.416 ms 00:23:03.097 [2024-12-05 19:41:30.243749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-12-05 19:41:30.243888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.097 [2024-12-05 19:41:30.243900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:03.097 [2024-12-05 
19:41:30.243911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:03.097 [2024-12-05 19:41:30.243919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-12-05 19:41:30.245123] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:03.097 [2024-12-05 19:41:30.248640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 334.621 ms, result 0 00:23:03.097 [2024-12-05 19:41:30.250272] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:03.097 [2024-12-05 19:41:30.263999] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:04.477  [2024-12-05T19:41:32.674Z] Copying: 12/256 [MB] (12 MBps) [2024-12-05T19:41:33.617Z] Copying: 22000/262144 [kB] (9184 kBps) [2024-12-05T19:41:34.562Z] Copying: 31072/262144 [kB] (9072 kBps) [2024-12-05T19:41:35.507Z] Copying: 40568/262144 [kB] (9496 kBps) [2024-12-05T19:41:36.495Z] Copying: 49/256 [MB] (10 MBps) [2024-12-05T19:41:37.442Z] Copying: 59900/262144 [kB] (8980 kBps) [2024-12-05T19:41:38.386Z] Copying: 69432/262144 [kB] (9532 kBps) [2024-12-05T19:41:39.328Z] Copying: 77/256 [MB] (10 MBps) [2024-12-05T19:41:40.714Z] Copying: 89380/262144 [kB] (9516 kBps) [2024-12-05T19:41:41.658Z] Copying: 98896/262144 [kB] (9516 kBps) [2024-12-05T19:41:42.604Z] Copying: 106/256 [MB] (10 MBps) [2024-12-05T19:41:43.543Z] Copying: 116/256 [MB] (10 MBps) [2024-12-05T19:41:44.489Z] Copying: 129016/262144 [kB] (9352 kBps) [2024-12-05T19:41:45.430Z] Copying: 139060/262144 [kB] (10044 kBps) [2024-12-05T19:41:46.371Z] Copying: 146/256 [MB] (10 MBps) [2024-12-05T19:41:47.755Z] Copying: 159412/262144 [kB] (9456 kBps) [2024-12-05T19:41:48.328Z] Copying: 169316/262144 [kB] (9904 kBps) [2024-12-05T19:41:49.717Z] Copying: 179032/262144 [kB] (9716 kBps) [2024-12-05T19:41:50.660Z] Copying: 188940/262144 [kB] (9908 kBps) [2024-12-05T19:41:51.603Z] Copying: 194/256 [MB] (10 MBps) [2024-12-05T19:41:52.546Z] Copying: 206/256 [MB] (11 MBps) [2024-12-05T19:41:53.506Z] Copying: 216/256 [MB] (10 MBps) [2024-12-05T19:41:54.452Z] Copying: 227/256 [MB] (10 MBps) [2024-12-05T19:41:55.396Z] Copying: 242624/262144 [kB] (9796 kBps) [2024-12-05T19:41:56.339Z] Copying: 251992/262144 [kB] (9368 kBps) [2024-12-05T19:41:56.611Z] Copying: 261724/262144 [kB] (9732 kBps) [2024-12-05T19:41:56.611Z] Copying: 256/256 [MB] (average 10064 kBps)[2024-12-05 19:41:56.530885] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:29.356 [2024-12-05 19:41:56.545088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.356 [2024-12-05 19:41:56.545151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:29.356 [2024-12-05 19:41:56.545177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:29.356 [2024-12-05 19:41:56.545188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.356 [2024-12-05 19:41:56.545217] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:29.356 [2024-12-05 19:41:56.548241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.356 [2024-12-05 19:41:56.548283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:29.356 
[2024-12-05 19:41:56.548295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.005 ms 00:23:29.356 [2024-12-05 19:41:56.548305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.356 [2024-12-05 19:41:56.548607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.356 [2024-12-05 19:41:56.548617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:29.356 [2024-12-05 19:41:56.548627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:23:29.356 [2024-12-05 19:41:56.548636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.356 [2024-12-05 19:41:56.552378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.356 [2024-12-05 19:41:56.552399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:29.356 [2024-12-05 19:41:56.552410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.718 ms 00:23:29.356 [2024-12-05 19:41:56.552419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.356 [2024-12-05 19:41:56.559480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.356 [2024-12-05 19:41:56.559517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:29.356 [2024-12-05 19:41:56.559529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.041 ms 00:23:29.357 [2024-12-05 19:41:56.559539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.357 [2024-12-05 19:41:56.587483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.357 [2024-12-05 19:41:56.587570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:29.357 [2024-12-05 19:41:56.587587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.855 ms 00:23:29.357 [2024-12-05 19:41:56.587596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.639 [2024-12-05 19:41:56.604569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.639 [2024-12-05 19:41:56.604634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:29.639 [2024-12-05 19:41:56.604662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.867 ms 00:23:29.639 [2024-12-05 19:41:56.604695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.639 [2024-12-05 19:41:56.604906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.639 [2024-12-05 19:41:56.604920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:29.639 [2024-12-05 19:41:56.604941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:23:29.639 [2024-12-05 19:41:56.604950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.639 [2024-12-05 19:41:56.632126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.639 [2024-12-05 19:41:56.632189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:29.639 [2024-12-05 19:41:56.632206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.154 ms 00:23:29.639 [2024-12-05 19:41:56.632214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.639 [2024-12-05 19:41:56.658975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.639 [2024-12-05 19:41:56.659039] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:29.639 [2024-12-05 19:41:56.659055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.644 ms 00:23:29.639 [2024-12-05 19:41:56.659065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.639 [2024-12-05 19:41:56.685255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.639 [2024-12-05 19:41:56.685330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:29.639 [2024-12-05 19:41:56.685345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.103 ms 00:23:29.639 [2024-12-05 19:41:56.685353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.639 [2024-12-05 19:41:56.710662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.639 [2024-12-05 19:41:56.710754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:29.639 [2024-12-05 19:41:56.710769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.164 ms 00:23:29.639 [2024-12-05 19:41:56.710778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.639 [2024-12-05 19:41:56.710847] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:29.639 [2024-12-05 19:41:56.710867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.710880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.710890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.710898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.710907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.710915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.710924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.710932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.710940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.710948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.710956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.710965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.710973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.710985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.710994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.711002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 
wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.711010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.711018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.711026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.711033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.711041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.711049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.711057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.711065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.711073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.711080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.711088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.711096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.711104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.711112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.711120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.711127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.711135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:29.639 [2024-12-05 19:41:56.711143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711410] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711618] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:29.640 [2024-12-05 19:41:56.711762] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:29.640 [2024-12-05 19:41:56.711771] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e1e31a5-869f-42dd-82c7-3d82fe790364 00:23:29.640 [2024-12-05 19:41:56.711780] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:29.640 [2024-12-05 19:41:56.711789] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:29.640 [2024-12-05 19:41:56.711797] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:29.640 [2024-12-05 19:41:56.711806] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:29.640 [2024-12-05 19:41:56.711814] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:29.640 [2024-12-05 19:41:56.711822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:29.640 [2024-12-05 19:41:56.711835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:29.640 [2024-12-05 19:41:56.711841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:29.640 [2024-12-05 19:41:56.711849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:29.640 [2024-12-05 19:41:56.711858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.640 [2024-12-05 19:41:56.711866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:29.640 [2024-12-05 19:41:56.711876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.013 ms 00:23:29.640 [2024-12-05 19:41:56.711883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.640 [2024-12-05 19:41:56.725853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.640 [2024-12-05 19:41:56.725919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:29.640 [2024-12-05 19:41:56.725934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.925 ms 00:23:29.640 [2024-12-05 19:41:56.725943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.640 
[2024-12-05 19:41:56.726375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.640 [2024-12-05 19:41:56.726395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:29.640 [2024-12-05 19:41:56.726407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:23:29.640 [2024-12-05 19:41:56.726415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.640 [2024-12-05 19:41:56.765281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.640 [2024-12-05 19:41:56.765363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:29.640 [2024-12-05 19:41:56.765377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.640 [2024-12-05 19:41:56.765394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.640 [2024-12-05 19:41:56.765543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.640 [2024-12-05 19:41:56.765554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:29.641 [2024-12-05 19:41:56.765563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.641 [2024-12-05 19:41:56.765571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.641 [2024-12-05 19:41:56.765636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.641 [2024-12-05 19:41:56.765645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:29.641 [2024-12-05 19:41:56.765654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.641 [2024-12-05 19:41:56.765662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.641 [2024-12-05 19:41:56.765702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.641 [2024-12-05 19:41:56.765712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:29.641 [2024-12-05 19:41:56.765721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.641 [2024-12-05 19:41:56.765729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.641 [2024-12-05 19:41:56.853342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.641 [2024-12-05 19:41:56.853424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:29.641 [2024-12-05 19:41:56.853439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.641 [2024-12-05 19:41:56.853448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.903 [2024-12-05 19:41:56.925822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.903 [2024-12-05 19:41:56.925904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:29.903 [2024-12-05 19:41:56.925920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.903 [2024-12-05 19:41:56.925929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.903 [2024-12-05 19:41:56.926041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.903 [2024-12-05 19:41:56.926052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:29.903 [2024-12-05 19:41:56.926061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.903 [2024-12-05 
19:41:56.926070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.903 [2024-12-05 19:41:56.926104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.903 [2024-12-05 19:41:56.926117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:29.903 [2024-12-05 19:41:56.926126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.903 [2024-12-05 19:41:56.926134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.903 [2024-12-05 19:41:56.926237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.903 [2024-12-05 19:41:56.926248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:29.903 [2024-12-05 19:41:56.926256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.903 [2024-12-05 19:41:56.926264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.903 [2024-12-05 19:41:56.926301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.903 [2024-12-05 19:41:56.926311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:29.903 [2024-12-05 19:41:56.926324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.903 [2024-12-05 19:41:56.926333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.903 [2024-12-05 19:41:56.926378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.903 [2024-12-05 19:41:56.926388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:29.903 [2024-12-05 19:41:56.926397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.903 [2024-12-05 19:41:56.926405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.903 [2024-12-05 19:41:56.926454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.903 [2024-12-05 19:41:56.926468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:29.903 [2024-12-05 19:41:56.926477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.903 [2024-12-05 19:41:56.926485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.903 [2024-12-05 19:41:56.926645] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 381.566 ms, result 0 00:23:30.476 00:23:30.476 00:23:30.738 19:41:57 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:31.310 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:23:31.310 19:41:58 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:23:31.310 19:41:58 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:23:31.310 19:41:58 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:31.310 19:41:58 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:31.310 19:41:58 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:23:31.310 19:41:58 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:31.310 19:41:58 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 77450 00:23:31.310 19:41:58 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77450 ']' 00:23:31.310 19:41:58 ftl.ftl_trim -- 
common/autotest_common.sh@958 -- # kill -0 77450 00:23:31.310 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77450) - No such process 00:23:31.310 Process with pid 77450 is not found 00:23:31.310 19:41:58 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 77450 is not found' 00:23:31.310 ************************************ 00:23:31.310 END TEST ftl_trim 00:23:31.310 ************************************ 00:23:31.310 00:23:31.310 real 1m40.986s 00:23:31.310 user 2m1.741s 00:23:31.310 sys 0m10.532s 00:23:31.310 19:41:58 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:31.310 19:41:58 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:31.310 19:41:58 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:23:31.310 19:41:58 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:31.310 19:41:58 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:31.310 19:41:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:31.310 ************************************ 00:23:31.310 START TEST ftl_restore 00:23:31.310 ************************************ 00:23:31.310 19:41:58 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:23:31.310 * Looking for test storage... 00:23:31.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:31.310 19:41:58 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:31.310 19:41:58 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:23:31.310 19:41:58 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:31.572 19:41:58 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:31.572 19:41:58 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:23:31.572 19:41:58 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:31.572 19:41:58 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:31.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.572 --rc genhtml_branch_coverage=1 00:23:31.572 --rc genhtml_function_coverage=1 00:23:31.572 --rc genhtml_legend=1 00:23:31.572 --rc geninfo_all_blocks=1 00:23:31.572 --rc geninfo_unexecuted_blocks=1 00:23:31.572 00:23:31.572 ' 00:23:31.572 19:41:58 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:31.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.572 --rc genhtml_branch_coverage=1 00:23:31.572 --rc genhtml_function_coverage=1 00:23:31.572 --rc genhtml_legend=1 00:23:31.572 --rc geninfo_all_blocks=1 00:23:31.572 --rc geninfo_unexecuted_blocks=1 00:23:31.572 00:23:31.572 ' 00:23:31.572 19:41:58 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:31.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.572 --rc genhtml_branch_coverage=1 00:23:31.572 --rc genhtml_function_coverage=1 00:23:31.572 --rc genhtml_legend=1 00:23:31.572 --rc geninfo_all_blocks=1 00:23:31.572 --rc geninfo_unexecuted_blocks=1 00:23:31.572 00:23:31.572 ' 00:23:31.572 19:41:58 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:31.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.572 --rc genhtml_branch_coverage=1 00:23:31.572 --rc genhtml_function_coverage=1 00:23:31.572 --rc genhtml_legend=1 00:23:31.572 --rc geninfo_all_blocks=1 00:23:31.572 --rc geninfo_unexecuted_blocks=1 00:23:31.572 00:23:31.572 ' 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.dv1zA0SiKs 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:31.572 
19:41:58 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77875 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77875 00:23:31.572 19:41:58 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77875 ']' 00:23:31.572 19:41:58 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.572 19:41:58 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:31.572 19:41:58 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.572 19:41:58 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:31.572 19:41:58 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:31.572 19:41:58 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:31.572 [2024-12-05 19:41:58.689523] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:23:31.572 [2024-12-05 19:41:58.689654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77875 ] 00:23:31.833 [2024-12-05 19:41:58.842584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.833 [2024-12-05 19:41:58.950915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.404 19:41:59 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.404 19:41:59 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:23:32.404 19:41:59 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:32.404 19:41:59 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:23:32.404 19:41:59 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:32.404 19:41:59 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:23:32.404 19:41:59 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:23:32.404 19:41:59 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:32.687 19:41:59 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:32.687 19:41:59 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:23:32.687 19:41:59 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:32.687 19:41:59 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:32.687 19:41:59 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:32.687 19:41:59 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:32.687 19:41:59 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:32.687 19:41:59 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:32.950 19:42:00 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:32.950 { 00:23:32.950 "name": "nvme0n1", 00:23:32.950 "aliases": [ 00:23:32.950 "0ce8ff4a-ce1d-4665-8972-9d5cb4fd8f33" 00:23:32.950 ], 00:23:32.950 "product_name": "NVMe disk", 00:23:32.950 "block_size": 4096, 00:23:32.950 "num_blocks": 1310720, 00:23:32.950 "uuid": 
"0ce8ff4a-ce1d-4665-8972-9d5cb4fd8f33", 00:23:32.950 "numa_id": -1, 00:23:32.950 "assigned_rate_limits": { 00:23:32.950 "rw_ios_per_sec": 0, 00:23:32.950 "rw_mbytes_per_sec": 0, 00:23:32.950 "r_mbytes_per_sec": 0, 00:23:32.950 "w_mbytes_per_sec": 0 00:23:32.950 }, 00:23:32.950 "claimed": true, 00:23:32.950 "claim_type": "read_many_write_one", 00:23:32.950 "zoned": false, 00:23:32.950 "supported_io_types": { 00:23:32.950 "read": true, 00:23:32.950 "write": true, 00:23:32.950 "unmap": true, 00:23:32.950 "flush": true, 00:23:32.950 "reset": true, 00:23:32.950 "nvme_admin": true, 00:23:32.950 "nvme_io": true, 00:23:32.950 "nvme_io_md": false, 00:23:32.950 "write_zeroes": true, 00:23:32.950 "zcopy": false, 00:23:32.950 "get_zone_info": false, 00:23:32.950 "zone_management": false, 00:23:32.950 "zone_append": false, 00:23:32.950 "compare": true, 00:23:32.950 "compare_and_write": false, 00:23:32.950 "abort": true, 00:23:32.950 "seek_hole": false, 00:23:32.950 "seek_data": false, 00:23:32.950 "copy": true, 00:23:32.950 "nvme_iov_md": false 00:23:32.950 }, 00:23:32.950 "driver_specific": { 00:23:32.950 "nvme": [ 00:23:32.950 { 00:23:32.950 "pci_address": "0000:00:11.0", 00:23:32.950 "trid": { 00:23:32.950 "trtype": "PCIe", 00:23:32.950 "traddr": "0000:00:11.0" 00:23:32.950 }, 00:23:32.950 "ctrlr_data": { 00:23:32.950 "cntlid": 0, 00:23:32.950 "vendor_id": "0x1b36", 00:23:32.950 "model_number": "QEMU NVMe Ctrl", 00:23:32.950 "serial_number": "12341", 00:23:32.950 "firmware_revision": "8.0.0", 00:23:32.950 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:32.951 "oacs": { 00:23:32.951 "security": 0, 00:23:32.951 "format": 1, 00:23:32.951 "firmware": 0, 00:23:32.951 "ns_manage": 1 00:23:32.951 }, 00:23:32.951 "multi_ctrlr": false, 00:23:32.951 "ana_reporting": false 00:23:32.951 }, 00:23:32.951 "vs": { 00:23:32.951 "nvme_version": "1.4" 00:23:32.951 }, 00:23:32.951 "ns_data": { 00:23:32.951 "id": 1, 00:23:32.951 "can_share": false 00:23:32.951 } 00:23:32.951 } 00:23:32.951 ], 00:23:32.951 "mp_policy": "active_passive" 00:23:32.951 } 00:23:32.951 } 00:23:32.951 ]' 00:23:32.951 19:42:00 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:32.951 19:42:00 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:32.951 19:42:00 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:32.951 19:42:00 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:32.951 19:42:00 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:32.951 19:42:00 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:23:32.951 19:42:00 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:23:32.951 19:42:00 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:32.951 19:42:00 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:23:32.951 19:42:00 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:32.951 19:42:00 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:33.212 19:42:00 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=afa4f4f1-fad5-4ca1-b9ec-6a51cb9c58e4 00:23:33.212 19:42:00 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:23:33.212 19:42:00 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u afa4f4f1-fad5-4ca1-b9ec-6a51cb9c58e4 00:23:33.473 19:42:00 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:23:33.735 19:42:00 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=5ccacc13-b3d3-43f4-94c1-de7de40bacfd 00:23:33.735 19:42:00 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5ccacc13-b3d3-43f4-94c1-de7de40bacfd 00:23:33.995 19:42:01 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=dcbbd3b8-4b4d-4c3c-91a6-e79deb665e6b 00:23:33.995 19:42:01 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:23:33.995 19:42:01 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 dcbbd3b8-4b4d-4c3c-91a6-e79deb665e6b 00:23:33.995 19:42:01 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:23:33.995 19:42:01 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:33.995 19:42:01 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=dcbbd3b8-4b4d-4c3c-91a6-e79deb665e6b 00:23:33.995 19:42:01 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:23:33.995 19:42:01 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size dcbbd3b8-4b4d-4c3c-91a6-e79deb665e6b 00:23:33.995 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=dcbbd3b8-4b4d-4c3c-91a6-e79deb665e6b 00:23:33.995 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:33.995 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:33.995 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:33.995 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dcbbd3b8-4b4d-4c3c-91a6-e79deb665e6b 00:23:34.257 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:34.257 { 00:23:34.257 "name": "dcbbd3b8-4b4d-4c3c-91a6-e79deb665e6b", 00:23:34.257 "aliases": [ 00:23:34.257 "lvs/nvme0n1p0" 00:23:34.257 ], 00:23:34.257 "product_name": "Logical Volume", 00:23:34.257 "block_size": 4096, 00:23:34.257 "num_blocks": 26476544, 00:23:34.257 "uuid": "dcbbd3b8-4b4d-4c3c-91a6-e79deb665e6b", 00:23:34.257 "assigned_rate_limits": { 00:23:34.257 "rw_ios_per_sec": 0, 00:23:34.257 "rw_mbytes_per_sec": 0, 00:23:34.257 "r_mbytes_per_sec": 0, 00:23:34.257 "w_mbytes_per_sec": 0 00:23:34.257 }, 00:23:34.257 "claimed": false, 00:23:34.257 "zoned": false, 00:23:34.257 "supported_io_types": { 00:23:34.257 "read": true, 00:23:34.257 "write": true, 00:23:34.257 "unmap": true, 00:23:34.257 "flush": false, 00:23:34.257 "reset": true, 00:23:34.257 "nvme_admin": false, 00:23:34.257 "nvme_io": false, 00:23:34.257 "nvme_io_md": false, 00:23:34.257 "write_zeroes": true, 00:23:34.257 "zcopy": false, 00:23:34.257 "get_zone_info": false, 00:23:34.257 "zone_management": false, 00:23:34.257 "zone_append": false, 00:23:34.257 "compare": false, 00:23:34.257 "compare_and_write": false, 00:23:34.257 "abort": false, 00:23:34.257 "seek_hole": true, 00:23:34.257 "seek_data": true, 00:23:34.257 "copy": false, 00:23:34.257 "nvme_iov_md": false 00:23:34.257 }, 00:23:34.257 "driver_specific": { 00:23:34.257 "lvol": { 00:23:34.257 "lvol_store_uuid": "5ccacc13-b3d3-43f4-94c1-de7de40bacfd", 00:23:34.257 "base_bdev": "nvme0n1", 00:23:34.257 "thin_provision": true, 00:23:34.257 "num_allocated_clusters": 0, 00:23:34.257 "snapshot": false, 00:23:34.257 "clone": false, 00:23:34.257 "esnap_clone": false 00:23:34.257 } 00:23:34.257 } 00:23:34.257 } 00:23:34.257 ]' 00:23:34.257 19:42:01 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:34.257 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:34.257 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:34.257 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:34.257 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:34.257 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:34.257 19:42:01 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:23:34.257 19:42:01 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:23:34.257 19:42:01 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:34.519 19:42:01 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:34.519 19:42:01 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:34.519 19:42:01 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size dcbbd3b8-4b4d-4c3c-91a6-e79deb665e6b 00:23:34.519 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=dcbbd3b8-4b4d-4c3c-91a6-e79deb665e6b 00:23:34.519 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:34.519 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:34.519 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:34.519 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dcbbd3b8-4b4d-4c3c-91a6-e79deb665e6b 00:23:34.781 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:34.781 { 00:23:34.781 "name": "dcbbd3b8-4b4d-4c3c-91a6-e79deb665e6b", 00:23:34.781 "aliases": [ 00:23:34.781 "lvs/nvme0n1p0" 00:23:34.781 ], 00:23:34.781 "product_name": "Logical Volume", 00:23:34.781 "block_size": 4096, 00:23:34.781 "num_blocks": 26476544, 00:23:34.781 "uuid": "dcbbd3b8-4b4d-4c3c-91a6-e79deb665e6b", 00:23:34.781 "assigned_rate_limits": { 00:23:34.781 "rw_ios_per_sec": 0, 00:23:34.781 "rw_mbytes_per_sec": 0, 00:23:34.781 "r_mbytes_per_sec": 0, 00:23:34.781 "w_mbytes_per_sec": 0 00:23:34.781 }, 00:23:34.781 "claimed": false, 00:23:34.781 "zoned": false, 00:23:34.781 "supported_io_types": { 00:23:34.781 "read": true, 00:23:34.781 "write": true, 00:23:34.781 "unmap": true, 00:23:34.781 "flush": false, 00:23:34.781 "reset": true, 00:23:34.781 "nvme_admin": false, 00:23:34.781 "nvme_io": false, 00:23:34.781 "nvme_io_md": false, 00:23:34.781 "write_zeroes": true, 00:23:34.781 "zcopy": false, 00:23:34.781 "get_zone_info": false, 00:23:34.781 "zone_management": false, 00:23:34.781 "zone_append": false, 00:23:34.781 "compare": false, 00:23:34.781 "compare_and_write": false, 00:23:34.781 "abort": false, 00:23:34.781 "seek_hole": true, 00:23:34.781 "seek_data": true, 00:23:34.781 "copy": false, 00:23:34.781 "nvme_iov_md": false 00:23:34.781 }, 00:23:34.781 "driver_specific": { 00:23:34.781 "lvol": { 00:23:34.781 "lvol_store_uuid": "5ccacc13-b3d3-43f4-94c1-de7de40bacfd", 00:23:34.781 "base_bdev": "nvme0n1", 00:23:34.781 "thin_provision": true, 00:23:34.781 "num_allocated_clusters": 0, 00:23:34.781 "snapshot": false, 00:23:34.781 "clone": false, 00:23:34.781 "esnap_clone": false 00:23:34.781 } 00:23:34.781 } 00:23:34.781 } 00:23:34.781 ]' 00:23:34.781 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
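The block_size and num_blocks probes traced on either side of this point are the body of autotest_common.sh's get_bdev_size helper: dump the bdev's JSON descriptor once over RPC, pull the geometry out with jq, and convert to MiB. A minimal sketch of that computation, reusing the rpc.py path from the trace (the helper's exact shell arithmetic is an assumption; only the traced commands and the bs/nb/bdev_size values come from the log):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    get_bdev_size() {
        local bdev_name=$1
        local bdev_info bs nb
        # One RPC round-trip; jq then reads the cached JSON twice.
        bdev_info=$("$rpc_py" bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
        # Bytes to MiB: 4096 * 26476544 / 1024 / 1024 = 103424, the bdev_size
        # the trace reports for the thin-provisioned lvol.
        echo $((bs * nb / 1024 / 1024))
    }

The 5171 MiB cache_size that common.sh settles on a few lines below is almost exactly 5% of this 103424 MiB figure, which suggests it is derived from it, though the formula itself is not visible in this excerpt.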
00:23:34.781 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:34.781 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:34.781 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:34.781 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:34.781 19:42:01 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:34.781 19:42:01 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:23:34.781 19:42:01 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:35.042 19:42:02 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:23:35.042 19:42:02 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size dcbbd3b8-4b4d-4c3c-91a6-e79deb665e6b 00:23:35.042 19:42:02 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=dcbbd3b8-4b4d-4c3c-91a6-e79deb665e6b 00:23:35.042 19:42:02 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:35.042 19:42:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:35.042 19:42:02 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:35.042 19:42:02 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dcbbd3b8-4b4d-4c3c-91a6-e79deb665e6b 00:23:35.303 19:42:02 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:35.303 { 00:23:35.303 "name": "dcbbd3b8-4b4d-4c3c-91a6-e79deb665e6b", 00:23:35.303 "aliases": [ 00:23:35.303 "lvs/nvme0n1p0" 00:23:35.303 ], 00:23:35.303 "product_name": "Logical Volume", 00:23:35.303 "block_size": 4096, 00:23:35.303 "num_blocks": 26476544, 00:23:35.303 "uuid": "dcbbd3b8-4b4d-4c3c-91a6-e79deb665e6b", 00:23:35.303 "assigned_rate_limits": { 00:23:35.303 "rw_ios_per_sec": 0, 00:23:35.303 "rw_mbytes_per_sec": 0, 00:23:35.303 "r_mbytes_per_sec": 0, 00:23:35.303 "w_mbytes_per_sec": 0 00:23:35.303 }, 00:23:35.303 "claimed": false, 00:23:35.303 "zoned": false, 00:23:35.303 "supported_io_types": { 00:23:35.303 "read": true, 00:23:35.303 "write": true, 00:23:35.303 "unmap": true, 00:23:35.303 "flush": false, 00:23:35.303 "reset": true, 00:23:35.303 "nvme_admin": false, 00:23:35.303 "nvme_io": false, 00:23:35.303 "nvme_io_md": false, 00:23:35.303 "write_zeroes": true, 00:23:35.303 "zcopy": false, 00:23:35.303 "get_zone_info": false, 00:23:35.303 "zone_management": false, 00:23:35.303 "zone_append": false, 00:23:35.303 "compare": false, 00:23:35.303 "compare_and_write": false, 00:23:35.303 "abort": false, 00:23:35.303 "seek_hole": true, 00:23:35.303 "seek_data": true, 00:23:35.303 "copy": false, 00:23:35.303 "nvme_iov_md": false 00:23:35.303 }, 00:23:35.303 "driver_specific": { 00:23:35.303 "lvol": { 00:23:35.303 "lvol_store_uuid": "5ccacc13-b3d3-43f4-94c1-de7de40bacfd", 00:23:35.303 "base_bdev": "nvme0n1", 00:23:35.303 "thin_provision": true, 00:23:35.303 "num_allocated_clusters": 0, 00:23:35.303 "snapshot": false, 00:23:35.303 "clone": false, 00:23:35.303 "esnap_clone": false 00:23:35.303 } 00:23:35.303 } 00:23:35.303 } 00:23:35.303 ]' 00:23:35.303 19:42:02 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:35.303 19:42:02 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:35.303 19:42:02 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:35.303 19:42:02 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:23:35.303 19:42:02 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:35.303 19:42:02 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:35.303 19:42:02 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:23:35.303 19:42:02 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d dcbbd3b8-4b4d-4c3c-91a6-e79deb665e6b --l2p_dram_limit 10' 00:23:35.303 19:42:02 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:23:35.303 19:42:02 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:35.303 19:42:02 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:35.303 19:42:02 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:23:35.303 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:23:35.303 19:42:02 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d dcbbd3b8-4b4d-4c3c-91a6-e79deb665e6b --l2p_dram_limit 10 -c nvc0n1p0 00:23:35.565 [2024-12-05 19:42:02.578196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.565 [2024-12-05 19:42:02.578259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:35.565 [2024-12-05 19:42:02.578276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:35.565 [2024-12-05 19:42:02.578286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.565 [2024-12-05 19:42:02.578359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.565 [2024-12-05 19:42:02.578371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:35.565 [2024-12-05 19:42:02.578383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:23:35.565 [2024-12-05 19:42:02.578392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.565 [2024-12-05 19:42:02.578418] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:35.565 [2024-12-05 19:42:02.579210] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:35.565 [2024-12-05 19:42:02.579237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.565 [2024-12-05 19:42:02.579246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:35.565 [2024-12-05 19:42:02.579256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:23:35.565 [2024-12-05 19:42:02.579264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.565 [2024-12-05 19:42:02.579396] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f2a2c721-9529-45cb-beae-3973f9aeaf2f 00:23:35.565 [2024-12-05 19:42:02.580532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.565 [2024-12-05 19:42:02.580566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:35.565 [2024-12-05 19:42:02.580576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:35.565 [2024-12-05 19:42:02.580585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.565 [2024-12-05 19:42:02.586164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.565 [2024-12-05 
19:42:02.586209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:35.565 [2024-12-05 19:42:02.586221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.536 ms 00:23:35.565 [2024-12-05 19:42:02.586232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.565 [2024-12-05 19:42:02.586327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.565 [2024-12-05 19:42:02.586338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:35.565 [2024-12-05 19:42:02.586346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:35.565 [2024-12-05 19:42:02.586358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.565 [2024-12-05 19:42:02.586415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.565 [2024-12-05 19:42:02.586426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:35.565 [2024-12-05 19:42:02.586436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:35.565 [2024-12-05 19:42:02.586445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.565 [2024-12-05 19:42:02.586467] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:35.565 [2024-12-05 19:42:02.590243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.565 [2024-12-05 19:42:02.590274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:35.565 [2024-12-05 19:42:02.590287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.779 ms 00:23:35.565 [2024-12-05 19:42:02.590294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.565 [2024-12-05 19:42:02.590333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.565 [2024-12-05 19:42:02.590342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:35.565 [2024-12-05 19:42:02.590352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:35.565 [2024-12-05 19:42:02.590359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.565 [2024-12-05 19:42:02.590404] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:35.565 [2024-12-05 19:42:02.590546] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:35.565 [2024-12-05 19:42:02.590562] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:35.565 [2024-12-05 19:42:02.590572] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:35.565 [2024-12-05 19:42:02.590583] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:35.565 [2024-12-05 19:42:02.590592] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:35.565 [2024-12-05 19:42:02.590601] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:35.565 [2024-12-05 19:42:02.590608] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:35.565 [2024-12-05 19:42:02.590620] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:35.565 [2024-12-05 19:42:02.590627] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:35.565 [2024-12-05 19:42:02.590636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.565 [2024-12-05 19:42:02.590649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:35.565 [2024-12-05 19:42:02.590658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:23:35.565 [2024-12-05 19:42:02.590665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.565 [2024-12-05 19:42:02.590764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.565 [2024-12-05 19:42:02.590773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:35.565 [2024-12-05 19:42:02.590785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:35.565 [2024-12-05 19:42:02.590793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.565 [2024-12-05 19:42:02.590898] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:35.565 [2024-12-05 19:42:02.590907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:35.565 [2024-12-05 19:42:02.590917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:35.565 [2024-12-05 19:42:02.590924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.565 [2024-12-05 19:42:02.590933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:35.565 [2024-12-05 19:42:02.590940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:35.565 [2024-12-05 19:42:02.590948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:35.565 [2024-12-05 19:42:02.590954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:35.565 [2024-12-05 19:42:02.590964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:35.565 [2024-12-05 19:42:02.590971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:35.565 [2024-12-05 19:42:02.590979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:35.565 [2024-12-05 19:42:02.590985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:35.565 [2024-12-05 19:42:02.590993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:35.565 [2024-12-05 19:42:02.591000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:35.565 [2024-12-05 19:42:02.591008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:35.565 [2024-12-05 19:42:02.591014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.565 [2024-12-05 19:42:02.591024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:35.565 [2024-12-05 19:42:02.591031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:35.565 [2024-12-05 19:42:02.591038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.565 [2024-12-05 19:42:02.591045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:35.565 [2024-12-05 19:42:02.591053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:35.565 [2024-12-05 19:42:02.591059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:35.565 [2024-12-05 19:42:02.591067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:35.565 
[2024-12-05 19:42:02.591075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:35.565 [2024-12-05 19:42:02.591083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:35.565 [2024-12-05 19:42:02.591090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:35.565 [2024-12-05 19:42:02.591098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:35.565 [2024-12-05 19:42:02.591105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:35.565 [2024-12-05 19:42:02.591113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:35.566 [2024-12-05 19:42:02.591119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:35.566 [2024-12-05 19:42:02.591127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:35.566 [2024-12-05 19:42:02.591133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:35.566 [2024-12-05 19:42:02.591143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:35.566 [2024-12-05 19:42:02.591149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:35.566 [2024-12-05 19:42:02.591158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:35.566 [2024-12-05 19:42:02.591165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:35.566 [2024-12-05 19:42:02.591172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:35.566 [2024-12-05 19:42:02.591179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:35.566 [2024-12-05 19:42:02.591186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:35.566 [2024-12-05 19:42:02.591192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.566 [2024-12-05 19:42:02.591200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:35.566 [2024-12-05 19:42:02.591206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:35.566 [2024-12-05 19:42:02.591214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.566 [2024-12-05 19:42:02.591220] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:35.566 [2024-12-05 19:42:02.591229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:35.566 [2024-12-05 19:42:02.591236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:35.566 [2024-12-05 19:42:02.591244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.566 [2024-12-05 19:42:02.591252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:35.566 [2024-12-05 19:42:02.591261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:35.566 [2024-12-05 19:42:02.591268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:35.566 [2024-12-05 19:42:02.591276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:35.566 [2024-12-05 19:42:02.591282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:35.566 [2024-12-05 19:42:02.591290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:35.566 [2024-12-05 19:42:02.591298] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:35.566 [2024-12-05 
19:42:02.591310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:35.566 [2024-12-05 19:42:02.591320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:35.566 [2024-12-05 19:42:02.591329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:35.566 [2024-12-05 19:42:02.591336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:35.566 [2024-12-05 19:42:02.591345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:35.566 [2024-12-05 19:42:02.591352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:35.566 [2024-12-05 19:42:02.591362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:35.566 [2024-12-05 19:42:02.591369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:35.566 [2024-12-05 19:42:02.591377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:35.566 [2024-12-05 19:42:02.591385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:35.566 [2024-12-05 19:42:02.591395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:35.566 [2024-12-05 19:42:02.591402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:35.566 [2024-12-05 19:42:02.591410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:35.566 [2024-12-05 19:42:02.591417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:35.566 [2024-12-05 19:42:02.591426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:35.566 [2024-12-05 19:42:02.591433] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:35.566 [2024-12-05 19:42:02.591442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:35.566 [2024-12-05 19:42:02.591450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:35.566 [2024-12-05 19:42:02.591459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:35.566 [2024-12-05 19:42:02.591466] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:35.566 [2024-12-05 19:42:02.591476] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:35.566 [2024-12-05 19:42:02.591483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.566 [2024-12-05 19:42:02.591492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:35.566 [2024-12-05 19:42:02.591499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:23:35.566 [2024-12-05 19:42:02.591507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.566 [2024-12-05 19:42:02.591543] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:35.566 [2024-12-05 19:42:02.591556] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:38.858 [2024-12-05 19:42:05.846596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.858 [2024-12-05 19:42:05.846681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:38.858 [2024-12-05 19:42:05.846696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3255.041 ms 00:23:38.858 [2024-12-05 19:42:05.846707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.858 [2024-12-05 19:42:05.879490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.858 [2024-12-05 19:42:05.879558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:38.858 [2024-12-05 19:42:05.879572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.555 ms 00:23:38.858 [2024-12-05 19:42:05.879585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.858 [2024-12-05 19:42:05.879817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.858 [2024-12-05 19:42:05.879882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:38.858 [2024-12-05 19:42:05.879896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:23:38.858 [2024-12-05 19:42:05.879917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.858 [2024-12-05 19:42:05.920929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.858 [2024-12-05 19:42:05.920994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:38.858 [2024-12-05 19:42:05.921008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.942 ms 00:23:38.858 [2024-12-05 19:42:05.921018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.858 [2024-12-05 19:42:05.921064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.858 [2024-12-05 19:42:05.921079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:38.858 [2024-12-05 19:42:05.921087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:38.858 [2024-12-05 19:42:05.921103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.858 [2024-12-05 19:42:05.921517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.858 [2024-12-05 19:42:05.921565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:38.858 [2024-12-05 19:42:05.921583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:23:38.858 [2024-12-05 19:42:05.921599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.858 
[2024-12-05 19:42:05.921749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.858 [2024-12-05 19:42:05.921768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:38.858 [2024-12-05 19:42:05.921779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:23:38.858 [2024-12-05 19:42:05.921790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.858 [2024-12-05 19:42:05.938294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.858 [2024-12-05 19:42:05.938353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:38.858 [2024-12-05 19:42:05.938365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.484 ms 00:23:38.858 [2024-12-05 19:42:05.938375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.858 [2024-12-05 19:42:05.958245] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:38.858 [2024-12-05 19:42:05.961143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.858 [2024-12-05 19:42:05.961187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:38.858 [2024-12-05 19:42:05.961203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.659 ms 00:23:38.858 [2024-12-05 19:42:05.961213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.858 [2024-12-05 19:42:06.036323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.858 [2024-12-05 19:42:06.036401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:38.858 [2024-12-05 19:42:06.036419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.035 ms 00:23:38.858 [2024-12-05 19:42:06.036428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.858 [2024-12-05 19:42:06.036649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.858 [2024-12-05 19:42:06.036693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:38.858 [2024-12-05 19:42:06.036708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:23:38.858 [2024-12-05 19:42:06.036718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.858 [2024-12-05 19:42:06.067708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.858 [2024-12-05 19:42:06.067776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:38.858 [2024-12-05 19:42:06.067793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.884 ms 00:23:38.858 [2024-12-05 19:42:06.067802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.858 [2024-12-05 19:42:06.095856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.858 [2024-12-05 19:42:06.095942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:38.858 [2024-12-05 19:42:06.095966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.967 ms 00:23:38.858 [2024-12-05 19:42:06.095980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.858 [2024-12-05 19:42:06.096955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.858 [2024-12-05 19:42:06.097002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:38.858 
[2024-12-05 19:42:06.097022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.884 ms 00:23:38.858 [2024-12-05 19:42:06.097038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.117 [2024-12-05 19:42:06.176401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.117 [2024-12-05 19:42:06.176466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:39.117 [2024-12-05 19:42:06.176487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.253 ms 00:23:39.117 [2024-12-05 19:42:06.176495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.117 [2024-12-05 19:42:06.204507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.117 [2024-12-05 19:42:06.204575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:39.117 [2024-12-05 19:42:06.204589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.869 ms 00:23:39.118 [2024-12-05 19:42:06.204597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.118 [2024-12-05 19:42:06.233500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.118 [2024-12-05 19:42:06.233567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:39.118 [2024-12-05 19:42:06.233582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.806 ms 00:23:39.118 [2024-12-05 19:42:06.233592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.118 [2024-12-05 19:42:06.260526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.118 [2024-12-05 19:42:06.260600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:39.118 [2024-12-05 19:42:06.260614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.853 ms 00:23:39.118 [2024-12-05 19:42:06.260622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.118 [2024-12-05 19:42:06.260699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.118 [2024-12-05 19:42:06.260709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:39.118 [2024-12-05 19:42:06.260722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:39.118 [2024-12-05 19:42:06.260730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.118 [2024-12-05 19:42:06.260832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.118 [2024-12-05 19:42:06.260844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:39.118 [2024-12-05 19:42:06.260854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:39.118 [2024-12-05 19:42:06.260862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.118 [2024-12-05 19:42:06.262857] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3684.123 ms, result 0 00:23:39.118 { 00:23:39.118 "name": "ftl0", 00:23:39.118 "uuid": "f2a2c721-9529-45cb-beae-3973f9aeaf2f" 00:23:39.118 } 00:23:39.118 19:42:06 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:23:39.118 19:42:06 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:39.377 19:42:06 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:23:39.377 19:42:06 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:39.637 [2024-12-05 19:42:06.685516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.637 [2024-12-05 19:42:06.685607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:39.637 [2024-12-05 19:42:06.685631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:39.637 [2024-12-05 19:42:06.685648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.637 [2024-12-05 19:42:06.685700] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:39.637 [2024-12-05 19:42:06.688391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.637 [2024-12-05 19:42:06.688433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:39.637 [2024-12-05 19:42:06.688448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.659 ms 00:23:39.637 [2024-12-05 19:42:06.688457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.637 [2024-12-05 19:42:06.688737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.637 [2024-12-05 19:42:06.688788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:39.637 [2024-12-05 19:42:06.688800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:23:39.637 [2024-12-05 19:42:06.688808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.637 [2024-12-05 19:42:06.692255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.637 [2024-12-05 19:42:06.692295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:39.637 [2024-12-05 19:42:06.692308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.422 ms 00:23:39.637 [2024-12-05 19:42:06.692316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.637 [2024-12-05 19:42:06.698512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.637 [2024-12-05 19:42:06.698552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:39.637 [2024-12-05 19:42:06.698567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.167 ms 00:23:39.637 [2024-12-05 19:42:06.698574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.637 [2024-12-05 19:42:06.724026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.637 [2024-12-05 19:42:06.724088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:39.637 [2024-12-05 19:42:06.724105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.350 ms 00:23:39.637 [2024-12-05 19:42:06.724115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.638 [2024-12-05 19:42:06.741249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.638 [2024-12-05 19:42:06.741333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:39.638 [2024-12-05 19:42:06.741357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.051 ms 00:23:39.638 [2024-12-05 19:42:06.741369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.638 [2024-12-05 19:42:06.741639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.638 [2024-12-05 19:42:06.741661] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:39.638 [2024-12-05 19:42:06.741695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:23:39.638 [2024-12-05 19:42:06.741708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.638 [2024-12-05 19:42:06.771301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.638 [2024-12-05 19:42:06.771388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:39.638 [2024-12-05 19:42:06.771412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.544 ms 00:23:39.638 [2024-12-05 19:42:06.771421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.638 [2024-12-05 19:42:06.803366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.638 [2024-12-05 19:42:06.803435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:39.638 [2024-12-05 19:42:06.803450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.828 ms 00:23:39.638 [2024-12-05 19:42:06.803458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.638 [2024-12-05 19:42:06.829058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.638 [2024-12-05 19:42:06.829122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:39.638 [2024-12-05 19:42:06.829136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.511 ms 00:23:39.638 [2024-12-05 19:42:06.829145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.638 [2024-12-05 19:42:06.855772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.638 [2024-12-05 19:42:06.855841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:39.638 [2024-12-05 19:42:06.855857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.480 ms 00:23:39.638 [2024-12-05 19:42:06.855865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.638 [2024-12-05 19:42:06.855955] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:39.638 [2024-12-05 19:42:06.855974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.855989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.855998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856062] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 
[2024-12-05 19:42:06.856270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:23:39.638 [2024-12-05 19:42:06.856501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:39.638 [2024-12-05 19:42:06.856593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:39.639 [2024-12-05 19:42:06.856894] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:39.639 [2024-12-05 19:42:06.856909] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f2a2c721-9529-45cb-beae-3973f9aeaf2f 00:23:39.639 [2024-12-05 19:42:06.856921] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:39.639 [2024-12-05 19:42:06.856937] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:39.639 [2024-12-05 19:42:06.856948] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:39.639 [2024-12-05 19:42:06.856958] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:39.639 [2024-12-05 19:42:06.856965] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:39.639 [2024-12-05 19:42:06.856974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:39.639 [2024-12-05 19:42:06.856981] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:39.639 [2024-12-05 19:42:06.856990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:39.639 [2024-12-05 19:42:06.856998] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:23:39.639 [2024-12-05 19:42:06.857007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.639 [2024-12-05 19:42:06.857015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:39.639 [2024-12-05 19:42:06.857025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.054 ms 00:23:39.639 [2024-12-05 19:42:06.857035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.639 [2024-12-05 19:42:06.870231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.639 [2024-12-05 19:42:06.870284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:39.639 [2024-12-05 19:42:06.870297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.114 ms 00:23:39.639 [2024-12-05 19:42:06.870305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.639 [2024-12-05 19:42:06.870667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.639 [2024-12-05 19:42:06.870692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:39.639 [2024-12-05 19:42:06.870705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:23:39.639 [2024-12-05 19:42:06.870712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.897 [2024-12-05 19:42:06.913615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.898 [2024-12-05 19:42:06.913687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:39.898 [2024-12-05 19:42:06.913702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.898 [2024-12-05 19:42:06.913711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.898 [2024-12-05 19:42:06.913788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.898 [2024-12-05 19:42:06.913797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:39.898 [2024-12-05 19:42:06.913808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.898 [2024-12-05 19:42:06.913816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.898 [2024-12-05 19:42:06.913940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.898 [2024-12-05 19:42:06.913952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:39.898 [2024-12-05 19:42:06.913962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.898 [2024-12-05 19:42:06.913969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.898 [2024-12-05 19:42:06.913991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.898 [2024-12-05 19:42:06.913999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:39.898 [2024-12-05 19:42:06.914009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.898 [2024-12-05 19:42:06.914018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.898 [2024-12-05 19:42:06.996033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.898 [2024-12-05 19:42:06.996099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:39.898 [2024-12-05 19:42:06.996113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:23:39.898 [2024-12-05 19:42:06.996121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.898 [2024-12-05 19:42:07.068997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.898 [2024-12-05 19:42:07.069058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:39.898 [2024-12-05 19:42:07.069074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.898 [2024-12-05 19:42:07.069085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.898 [2024-12-05 19:42:07.069175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.898 [2024-12-05 19:42:07.069185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:39.898 [2024-12-05 19:42:07.069195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.898 [2024-12-05 19:42:07.069202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.898 [2024-12-05 19:42:07.069266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.898 [2024-12-05 19:42:07.069276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:39.898 [2024-12-05 19:42:07.069286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.898 [2024-12-05 19:42:07.069293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.898 [2024-12-05 19:42:07.069390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.898 [2024-12-05 19:42:07.069399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:39.898 [2024-12-05 19:42:07.069409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.898 [2024-12-05 19:42:07.069417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.898 [2024-12-05 19:42:07.069449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.898 [2024-12-05 19:42:07.069458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:39.898 [2024-12-05 19:42:07.069467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.898 [2024-12-05 19:42:07.069475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.898 [2024-12-05 19:42:07.069512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.898 [2024-12-05 19:42:07.069521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:39.898 [2024-12-05 19:42:07.069531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.898 [2024-12-05 19:42:07.069538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.898 [2024-12-05 19:42:07.069583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.898 [2024-12-05 19:42:07.069593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:39.898 [2024-12-05 19:42:07.069603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.898 [2024-12-05 19:42:07.069610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.898 [2024-12-05 19:42:07.069757] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 384.226 ms, result 0 00:23:39.898 true 00:23:39.898 19:42:07 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77875 
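The xtrace lines that follow show the test tearing down the FTL app through the killprocess helper from autotest_common.sh, called at ftl/restore.sh@66. Reconstructed purely from the traced commands (@954 through @978), the helper behaves roughly like the sketch below; the exact function body, and in particular what happens when the resolved command name is sudo, are assumptions not visible in this log:

killprocess() {   # hedged sketch; the real body lives in autotest_common.sh
    local pid=$1
    [ -z "$pid" ] && return 1              # @954: refuse an empty pid
    kill -0 "$pid" || return 1             # @958: bail if the process is already gone
    local process_name=unknown
    if [ "$(uname)" = Linux ]; then        # @959: only resolve the name on Linux
        process_name=$(ps --no-headers -o comm= "$pid")   # @960: here it is reactor_0
    fi
    [ "$process_name" = sudo ] && return 1 # @964: assumed guard; the trace only shows it evaluating false
    echo "killing process with pid $pid"   # @972: the message seen in the log
    kill "$pid"                            # @973: default SIGTERM
    wait "$pid" || true                    # @978: reap the child, ignoring its exit status
}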
00:23:39.898 19:42:07 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77875 ']' 00:23:39.898 19:42:07 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77875 00:23:39.898 19:42:07 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:23:39.898 19:42:07 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.898 19:42:07 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77875 00:23:39.898 19:42:07 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:39.898 19:42:07 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:39.898 killing process with pid 77875 00:23:39.898 19:42:07 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77875' 00:23:39.898 19:42:07 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77875 00:23:39.898 19:42:07 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77875 00:23:54.815 19:42:19 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:23:56.714 262144+0 records in 00:23:56.714 262144+0 records out 00:23:56.714 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.84124 s, 280 MB/s 00:23:56.714 19:42:23 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:58.616 19:42:25 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:58.616 [2024-12-05 19:42:25.658616] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:23:58.616 [2024-12-05 19:42:25.658724] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78100 ] 00:23:58.616 [2024-12-05 19:42:25.813399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.879 [2024-12-05 19:42:25.925112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.138 [2024-12-05 19:42:26.197392] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:59.138 [2024-12-05 19:42:26.197463] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:59.138 [2024-12-05 19:42:26.350816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.138 [2024-12-05 19:42:26.350880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:59.138 [2024-12-05 19:42:26.350893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:59.138 [2024-12-05 19:42:26.350902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.138 [2024-12-05 19:42:26.350955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.138 [2024-12-05 19:42:26.350968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:59.138 [2024-12-05 19:42:26.350977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:59.138 [2024-12-05 19:42:26.350985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.138 [2024-12-05 19:42:26.351006] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:23:59.138 [2024-12-05 19:42:26.351767] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:59.138 [2024-12-05 19:42:26.351795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.138 [2024-12-05 19:42:26.351803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:59.138 [2024-12-05 19:42:26.351812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:23:59.138 [2024-12-05 19:42:26.351820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.138 [2024-12-05 19:42:26.352952] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:59.138 [2024-12-05 19:42:26.365226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.138 [2024-12-05 19:42:26.365272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:59.138 [2024-12-05 19:42:26.365284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.274 ms 00:23:59.139 [2024-12-05 19:42:26.365293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.139 [2024-12-05 19:42:26.365366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.139 [2024-12-05 19:42:26.365376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:59.139 [2024-12-05 19:42:26.365384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:59.139 [2024-12-05 19:42:26.365391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.139 [2024-12-05 19:42:26.370492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.139 [2024-12-05 19:42:26.370530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:59.139 [2024-12-05 19:42:26.370541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.028 ms 00:23:59.139 [2024-12-05 19:42:26.370553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.139 [2024-12-05 19:42:26.370627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.139 [2024-12-05 19:42:26.370637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:59.139 [2024-12-05 19:42:26.370645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:59.139 [2024-12-05 19:42:26.370652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.139 [2024-12-05 19:42:26.370716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.139 [2024-12-05 19:42:26.370727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:59.139 [2024-12-05 19:42:26.370735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:59.139 [2024-12-05 19:42:26.370742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.139 [2024-12-05 19:42:26.370769] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:59.139 [2024-12-05 19:42:26.374048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.139 [2024-12-05 19:42:26.374077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:59.139 [2024-12-05 19:42:26.374090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.286 ms 00:23:59.139 [2024-12-05 19:42:26.374097] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.139 [2024-12-05 19:42:26.374130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.139 [2024-12-05 19:42:26.374139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:59.139 [2024-12-05 19:42:26.374147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:59.139 [2024-12-05 19:42:26.374154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.139 [2024-12-05 19:42:26.374176] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:59.139 [2024-12-05 19:42:26.374194] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:59.139 [2024-12-05 19:42:26.374230] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:59.139 [2024-12-05 19:42:26.374246] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:59.139 [2024-12-05 19:42:26.374350] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:59.139 [2024-12-05 19:42:26.374368] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:59.139 [2024-12-05 19:42:26.374379] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:59.139 [2024-12-05 19:42:26.374389] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:59.139 [2024-12-05 19:42:26.374400] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:59.139 [2024-12-05 19:42:26.374408] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:59.139 [2024-12-05 19:42:26.374416] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:59.139 [2024-12-05 19:42:26.374426] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:59.139 [2024-12-05 19:42:26.374433] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:59.139 [2024-12-05 19:42:26.374440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.139 [2024-12-05 19:42:26.374448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:59.139 [2024-12-05 19:42:26.374456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:23:59.139 [2024-12-05 19:42:26.374463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.139 [2024-12-05 19:42:26.374546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.139 [2024-12-05 19:42:26.374554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:59.139 [2024-12-05 19:42:26.374562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:59.139 [2024-12-05 19:42:26.374569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.139 [2024-12-05 19:42:26.374707] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:59.139 [2024-12-05 19:42:26.374725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:59.139 [2024-12-05 19:42:26.374734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:23:59.139 [2024-12-05 19:42:26.374742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.139 [2024-12-05 19:42:26.374755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:59.139 [2024-12-05 19:42:26.374761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:59.139 [2024-12-05 19:42:26.374768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:59.139 [2024-12-05 19:42:26.374775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:59.139 [2024-12-05 19:42:26.374782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:59.139 [2024-12-05 19:42:26.374789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:59.139 [2024-12-05 19:42:26.374796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:59.139 [2024-12-05 19:42:26.374803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:59.139 [2024-12-05 19:42:26.374809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:59.139 [2024-12-05 19:42:26.374821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:59.139 [2024-12-05 19:42:26.374828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:59.139 [2024-12-05 19:42:26.374835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.139 [2024-12-05 19:42:26.374842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:59.139 [2024-12-05 19:42:26.374849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:59.139 [2024-12-05 19:42:26.374856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.139 [2024-12-05 19:42:26.374863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:59.139 [2024-12-05 19:42:26.374870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:59.139 [2024-12-05 19:42:26.374876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.139 [2024-12-05 19:42:26.374883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:59.139 [2024-12-05 19:42:26.374889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:59.139 [2024-12-05 19:42:26.374896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.139 [2024-12-05 19:42:26.374902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:59.139 [2024-12-05 19:42:26.374908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:59.139 [2024-12-05 19:42:26.374914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.139 [2024-12-05 19:42:26.374920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:59.139 [2024-12-05 19:42:26.374927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:59.139 [2024-12-05 19:42:26.374933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.139 [2024-12-05 19:42:26.374940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:59.139 [2024-12-05 19:42:26.374946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:59.139 [2024-12-05 19:42:26.374953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:59.139 [2024-12-05 19:42:26.374959] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:23:59.139 [2024-12-05 19:42:26.374966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:59.139 [2024-12-05 19:42:26.374972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:59.139 [2024-12-05 19:42:26.374979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:59.139 [2024-12-05 19:42:26.374985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:59.139 [2024-12-05 19:42:26.374991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.139 [2024-12-05 19:42:26.374998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:59.139 [2024-12-05 19:42:26.375005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:59.139 [2024-12-05 19:42:26.375012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.139 [2024-12-05 19:42:26.375018] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:59.139 [2024-12-05 19:42:26.375026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:59.139 [2024-12-05 19:42:26.375033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:59.139 [2024-12-05 19:42:26.375040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.139 [2024-12-05 19:42:26.375048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:59.139 [2024-12-05 19:42:26.375054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:59.139 [2024-12-05 19:42:26.375060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:59.139 [2024-12-05 19:42:26.375067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:59.139 [2024-12-05 19:42:26.375073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:59.139 [2024-12-05 19:42:26.375080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:59.139 [2024-12-05 19:42:26.375088] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:59.139 [2024-12-05 19:42:26.375097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:59.139 [2024-12-05 19:42:26.375110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:59.140 [2024-12-05 19:42:26.375117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:59.140 [2024-12-05 19:42:26.375124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:59.140 [2024-12-05 19:42:26.375131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:59.140 [2024-12-05 19:42:26.375138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:59.140 [2024-12-05 19:42:26.375144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:59.140 [2024-12-05 19:42:26.375151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:59.140 [2024-12-05 19:42:26.375159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:59.140 [2024-12-05 19:42:26.375166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:59.140 [2024-12-05 19:42:26.375173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:59.140 [2024-12-05 19:42:26.375180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:59.140 [2024-12-05 19:42:26.375187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:59.140 [2024-12-05 19:42:26.375194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:59.140 [2024-12-05 19:42:26.375201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:59.140 [2024-12-05 19:42:26.375207] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:59.140 [2024-12-05 19:42:26.375215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:59.140 [2024-12-05 19:42:26.375223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:59.140 [2024-12-05 19:42:26.375230] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:59.140 [2024-12-05 19:42:26.375237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:59.140 [2024-12-05 19:42:26.375244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:59.140 [2024-12-05 19:42:26.375252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.140 [2024-12-05 19:42:26.375259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:59.140 [2024-12-05 19:42:26.375266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.633 ms 00:23:59.140 [2024-12-05 19:42:26.375274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.399 [2024-12-05 19:42:26.400878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.399 [2024-12-05 19:42:26.400927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:59.399 [2024-12-05 19:42:26.400939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.559 ms 00:23:59.399 [2024-12-05 19:42:26.400950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.399 [2024-12-05 19:42:26.401045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.399 [2024-12-05 19:42:26.401053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:59.399 [2024-12-05 19:42:26.401061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.064 ms 00:23:59.399 [2024-12-05 19:42:26.401069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.399 [2024-12-05 19:42:26.439869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.399 [2024-12-05 19:42:26.439925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:59.399 [2024-12-05 19:42:26.439940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.736 ms 00:23:59.399 [2024-12-05 19:42:26.439948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.399 [2024-12-05 19:42:26.440004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.399 [2024-12-05 19:42:26.440014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:59.399 [2024-12-05 19:42:26.440025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:59.399 [2024-12-05 19:42:26.440033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.399 [2024-12-05 19:42:26.440400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.399 [2024-12-05 19:42:26.440426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:59.399 [2024-12-05 19:42:26.440436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:23:59.399 [2024-12-05 19:42:26.440443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.399 [2024-12-05 19:42:26.440564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.399 [2024-12-05 19:42:26.440580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:59.399 [2024-12-05 19:42:26.440589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:23:59.399 [2024-12-05 19:42:26.440601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.399 [2024-12-05 19:42:26.453413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.399 [2024-12-05 19:42:26.453454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:59.399 [2024-12-05 19:42:26.453468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.792 ms 00:23:59.399 [2024-12-05 19:42:26.453475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.399 [2024-12-05 19:42:26.465760] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:59.399 [2024-12-05 19:42:26.465817] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:59.399 [2024-12-05 19:42:26.465830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.399 [2024-12-05 19:42:26.465839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:59.399 [2024-12-05 19:42:26.465850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.231 ms 00:23:59.399 [2024-12-05 19:42:26.465858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.399 [2024-12-05 19:42:26.490270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.399 [2024-12-05 19:42:26.490333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:59.399 [2024-12-05 19:42:26.490346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.347 ms 00:23:59.399 [2024-12-05 19:42:26.490356] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.399 [2024-12-05 19:42:26.502493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.399 [2024-12-05 19:42:26.502542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:59.399 [2024-12-05 19:42:26.502553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.071 ms 00:23:59.399 [2024-12-05 19:42:26.502561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.399 [2024-12-05 19:42:26.514375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.399 [2024-12-05 19:42:26.514424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:59.399 [2024-12-05 19:42:26.514436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.759 ms 00:23:59.399 [2024-12-05 19:42:26.514444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.399 [2024-12-05 19:42:26.515109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.399 [2024-12-05 19:42:26.515135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:59.399 [2024-12-05 19:42:26.515144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:23:59.399 [2024-12-05 19:42:26.515154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.399 [2024-12-05 19:42:26.571076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.399 [2024-12-05 19:42:26.571133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:59.399 [2024-12-05 19:42:26.571146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.904 ms 00:23:59.399 [2024-12-05 19:42:26.571162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.399 [2024-12-05 19:42:26.581980] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:59.399 [2024-12-05 19:42:26.584628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.399 [2024-12-05 19:42:26.584662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:59.399 [2024-12-05 19:42:26.584684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.403 ms 00:23:59.399 [2024-12-05 19:42:26.584693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.399 [2024-12-05 19:42:26.584805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.399 [2024-12-05 19:42:26.584816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:59.399 [2024-12-05 19:42:26.584825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:59.399 [2024-12-05 19:42:26.584832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.399 [2024-12-05 19:42:26.584901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.399 [2024-12-05 19:42:26.584918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:59.399 [2024-12-05 19:42:26.584926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:59.399 [2024-12-05 19:42:26.584934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.399 [2024-12-05 19:42:26.584952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.399 [2024-12-05 19:42:26.584960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:23:59.399 [2024-12-05 19:42:26.584968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:59.399 [2024-12-05 19:42:26.584976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.399 [2024-12-05 19:42:26.585005] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:59.399 [2024-12-05 19:42:26.585016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.399 [2024-12-05 19:42:26.585024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:59.399 [2024-12-05 19:42:26.585031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:59.399 [2024-12-05 19:42:26.585038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.399 [2024-12-05 19:42:26.608690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.399 [2024-12-05 19:42:26.608750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:59.399 [2024-12-05 19:42:26.608763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.633 ms 00:23:59.399 [2024-12-05 19:42:26.608777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.399 [2024-12-05 19:42:26.608869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.399 [2024-12-05 19:42:26.608878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:59.399 [2024-12-05 19:42:26.608887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:59.399 [2024-12-05 19:42:26.608894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.399 [2024-12-05 19:42:26.610329] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 259.093 ms, result 0 00:24:00.771  [2024-12-05T19:42:28.958Z] Copying: 42/1024 [MB] (42 MBps) [2024-12-05T19:42:29.891Z] Copying: 84/1024 [MB] (42 MBps) [2024-12-05T19:42:30.824Z] Copying: 129/1024 [MB] (44 MBps) [2024-12-05T19:42:31.758Z] Copying: 176/1024 [MB] (46 MBps) [2024-12-05T19:42:32.693Z] Copying: 224/1024 [MB] (48 MBps) [2024-12-05T19:42:33.626Z] Copying: 273/1024 [MB] (49 MBps) [2024-12-05T19:42:35.000Z] Copying: 316/1024 [MB] (42 MBps) [2024-12-05T19:42:35.932Z] Copying: 361/1024 [MB] (44 MBps) [2024-12-05T19:42:36.865Z] Copying: 403/1024 [MB] (42 MBps) [2024-12-05T19:42:37.797Z] Copying: 449/1024 [MB] (45 MBps) [2024-12-05T19:42:38.731Z] Copying: 495/1024 [MB] (45 MBps) [2024-12-05T19:42:39.666Z] Copying: 544/1024 [MB] (49 MBps) [2024-12-05T19:42:40.663Z] Copying: 591/1024 [MB] (47 MBps) [2024-12-05T19:42:42.035Z] Copying: 638/1024 [MB] (46 MBps) [2024-12-05T19:42:42.974Z] Copying: 685/1024 [MB] (46 MBps) [2024-12-05T19:42:43.907Z] Copying: 730/1024 [MB] (45 MBps) [2024-12-05T19:42:44.842Z] Copying: 776/1024 [MB] (45 MBps) [2024-12-05T19:42:45.843Z] Copying: 819/1024 [MB] (42 MBps) [2024-12-05T19:42:46.777Z] Copying: 858/1024 [MB] (39 MBps) [2024-12-05T19:42:47.708Z] Copying: 902/1024 [MB] (43 MBps) [2024-12-05T19:42:48.642Z] Copying: 946/1024 [MB] (43 MBps) [2024-12-05T19:42:49.576Z] Copying: 991/1024 [MB] (45 MBps) [2024-12-05T19:42:49.576Z] Copying: 1024/1024 [MB] (average 44 MBps)[2024-12-05 19:42:49.394857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.321 [2024-12-05 19:42:49.394911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 
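The startup above printed the same layout twice: dump_region reports offsets and sizes in MiB, while the superblock dump reports blk_offs/blk_sz as hexadecimal block counts. Assuming the FTL block size is 4 KiB (the log itself never states it), the two views agree; a quick cross-check in shell arithmetic, taking the l2p region (superblock type 0x2) and the base-device data region (type 0x9):

# hedged cross-check, assuming 4-KiB FTL blocks
echo $(( 0x20 * 4096 )) bytes               # l2p offset: 131072 B = 0.125 MiB ("offset: 0.12 MiB")
echo $(( 0x5000 * 4096 / 1048576 )) MiB     # l2p size: 80 MiB ("blocks: 80.00 MiB")
echo $(( 0x1900000 * 4096 / 1048576 )) MiB  # data region: 102400 MiB ("blocks: 102400.00 MiB")

That last figure is also consistent with the reported "Base device capacity: 103424.00 MiB": 102400 MiB of data plus roughly 1 GiB left for the superblock, vmap and other metadata regions.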
00:24:22.321 [2024-12-05 19:42:49.394924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:22.321 [2024-12-05 19:42:49.394933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.321 [2024-12-05 19:42:49.394953] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:22.321 [2024-12-05 19:42:49.397823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.321 [2024-12-05 19:42:49.397951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:22.321 [2024-12-05 19:42:49.398023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.854 ms 00:24:22.321 [2024-12-05 19:42:49.398046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.321 [2024-12-05 19:42:49.399456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.321 [2024-12-05 19:42:49.399575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:22.321 [2024-12-05 19:42:49.399634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.371 ms 00:24:22.321 [2024-12-05 19:42:49.399657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.321 [2024-12-05 19:42:49.413020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.321 [2024-12-05 19:42:49.413227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:22.321 [2024-12-05 19:42:49.413287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.316 ms 00:24:22.321 [2024-12-05 19:42:49.413310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.321 [2024-12-05 19:42:49.419494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.321 [2024-12-05 19:42:49.419665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:22.321 [2024-12-05 19:42:49.419736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.131 ms 00:24:22.321 [2024-12-05 19:42:49.419758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.321 [2024-12-05 19:42:49.445011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.321 [2024-12-05 19:42:49.445242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:22.321 [2024-12-05 19:42:49.445261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.171 ms 00:24:22.321 [2024-12-05 19:42:49.445270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.321 [2024-12-05 19:42:49.459786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.321 [2024-12-05 19:42:49.459852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:22.321 [2024-12-05 19:42:49.459866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.471 ms 00:24:22.321 [2024-12-05 19:42:49.459875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.321 [2024-12-05 19:42:49.460037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.321 [2024-12-05 19:42:49.460050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:22.321 [2024-12-05 19:42:49.460059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:24:22.321 [2024-12-05 19:42:49.460066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.321 [2024-12-05 
19:42:49.484895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.321 [2024-12-05 19:42:49.484950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:22.321 [2024-12-05 19:42:49.484964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.813 ms 00:24:22.321 [2024-12-05 19:42:49.484973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.321 [2024-12-05 19:42:49.508796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.321 [2024-12-05 19:42:49.508849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:22.321 [2024-12-05 19:42:49.508862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.767 ms 00:24:22.321 [2024-12-05 19:42:49.508870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.321 [2024-12-05 19:42:49.532400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.321 [2024-12-05 19:42:49.532454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:22.321 [2024-12-05 19:42:49.532467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.472 ms 00:24:22.321 [2024-12-05 19:42:49.532474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.321 [2024-12-05 19:42:49.556478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.321 [2024-12-05 19:42:49.556530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:22.321 [2024-12-05 19:42:49.556543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.920 ms 00:24:22.321 [2024-12-05 19:42:49.556550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.321 [2024-12-05 19:42:49.556610] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:22.321 [2024-12-05 19:42:49.556626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556740] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:22.321 [2024-12-05 19:42:49.556877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.556884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.556891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.556899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.556906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.556914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.556921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.556928] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.556935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.556943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.556950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.556957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.556964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.556972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.556979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.556986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.556993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 
19:42:49.557111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:24:22.322 [2024-12-05 19:42:49.557294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:22.322 [2024-12-05 19:42:49.557406] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:22.322 [2024-12-05 19:42:49.557417] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f2a2c721-9529-45cb-beae-3973f9aeaf2f 00:24:22.322 [2024-12-05 19:42:49.557425] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:22.322 [2024-12-05 19:42:49.557432] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:22.322 [2024-12-05 19:42:49.557440] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:22.322 [2024-12-05 19:42:49.557447] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:22.322 [2024-12-05 19:42:49.557455] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:22.322 [2024-12-05 19:42:49.557468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:22.322 [2024-12-05 19:42:49.557475] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:22.322 [2024-12-05 19:42:49.557482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:22.322 [2024-12-05 19:42:49.557489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:22.322 [2024-12-05 19:42:49.557496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.322 [2024-12-05 19:42:49.557504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:22.322 [2024-12-05 19:42:49.557512] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.888 ms 00:24:22.322 [2024-12-05 19:42:49.557519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.322 [2024-12-05 19:42:49.569991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.322 [2024-12-05 19:42:49.570045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:22.322 [2024-12-05 19:42:49.570058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.449 ms 00:24:22.322 [2024-12-05 19:42:49.570065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.322 [2024-12-05 19:42:49.570428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.322 [2024-12-05 19:42:49.570453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:22.322 [2024-12-05 19:42:49.570463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:24:22.322 [2024-12-05 19:42:49.570473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.581 [2024-12-05 19:42:49.603786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.581 [2024-12-05 19:42:49.603867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:22.581 [2024-12-05 19:42:49.603888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.581 [2024-12-05 19:42:49.603902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.581 [2024-12-05 19:42:49.603997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.581 [2024-12-05 19:42:49.604010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:22.581 [2024-12-05 19:42:49.604022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.581 [2024-12-05 19:42:49.604039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.581 [2024-12-05 19:42:49.604157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.581 [2024-12-05 19:42:49.604174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:22.581 [2024-12-05 19:42:49.604187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.581 [2024-12-05 19:42:49.604199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.581 [2024-12-05 19:42:49.604223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.581 [2024-12-05 19:42:49.604236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:22.581 [2024-12-05 19:42:49.604248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.581 [2024-12-05 19:42:49.604260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.581 [2024-12-05 19:42:49.683305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.581 [2024-12-05 19:42:49.683366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:22.581 [2024-12-05 19:42:49.683379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.581 [2024-12-05 19:42:49.683388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.581 [2024-12-05 19:42:49.747737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.581 [2024-12-05 19:42:49.747792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:24:22.581 [2024-12-05 19:42:49.747804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.581 [2024-12-05 19:42:49.747822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.581 [2024-12-05 19:42:49.747878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.581 [2024-12-05 19:42:49.747888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:22.581 [2024-12-05 19:42:49.747897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.581 [2024-12-05 19:42:49.747904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.581 [2024-12-05 19:42:49.747953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.581 [2024-12-05 19:42:49.747961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:22.581 [2024-12-05 19:42:49.747969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.581 [2024-12-05 19:42:49.747976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.581 [2024-12-05 19:42:49.748071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.581 [2024-12-05 19:42:49.748081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:22.581 [2024-12-05 19:42:49.748089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.581 [2024-12-05 19:42:49.748096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.581 [2024-12-05 19:42:49.748126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.581 [2024-12-05 19:42:49.748134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:22.581 [2024-12-05 19:42:49.748142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.581 [2024-12-05 19:42:49.748149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.581 [2024-12-05 19:42:49.748181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.581 [2024-12-05 19:42:49.748192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:22.581 [2024-12-05 19:42:49.748199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.581 [2024-12-05 19:42:49.748207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.581 [2024-12-05 19:42:49.748244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.581 [2024-12-05 19:42:49.748254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:22.581 [2024-12-05 19:42:49.748262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.581 [2024-12-05 19:42:49.748269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.581 [2024-12-05 19:42:49.748379] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 353.495 ms, result 0 00:24:23.511 00:24:23.511 00:24:23.512 19:42:50 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:24:23.512 [2024-12-05 19:42:50.550076] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
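The command echoed just above (ftl/restore.sh@74) is the read-back half of the restore test: after the FTL instance was shut down and restarted, spdk_dd copies 262144 blocks out of ftl0 into a plain file so the data can be checksummed against what was written before the shutdown. A minimal standalone sketch of that step, with paths and options taken verbatim from the echoed command line; the SPDK shell variable is an assumed shorthand, not part of the original script:

  # Read 262144 blocks back from the restored FTL bdev into a scratch file.
  # Flags are verbatim from the log; SPDK is an assumed convenience variable.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_dd" --ib=ftl0 \
      --of="$SPDK/test/ftl/testfile" \
      --json="$SPDK/test/ftl/config/ftl.json" \
      --count=262144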
00:24:23.512 [2024-12-05 19:42:50.550210] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78363 ] 00:24:23.512 [2024-12-05 19:42:50.712979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.768 [2024-12-05 19:42:50.821208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.025 [2024-12-05 19:42:51.084385] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:24.026 [2024-12-05 19:42:51.084456] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:24.026 [2024-12-05 19:42:51.237775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.026 [2024-12-05 19:42:51.238034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:24.026 [2024-12-05 19:42:51.238056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:24.026 [2024-12-05 19:42:51.238065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.026 [2024-12-05 19:42:51.238127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.026 [2024-12-05 19:42:51.238140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:24.026 [2024-12-05 19:42:51.238148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:24.026 [2024-12-05 19:42:51.238156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.026 [2024-12-05 19:42:51.238176] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:24.026 [2024-12-05 19:42:51.238980] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:24.026 [2024-12-05 19:42:51.239017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.026 [2024-12-05 19:42:51.239026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:24.026 [2024-12-05 19:42:51.239035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.846 ms 00:24:24.026 [2024-12-05 19:42:51.239042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.026 [2024-12-05 19:42:51.240160] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:24.026 [2024-12-05 19:42:51.252687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.026 [2024-12-05 19:42:51.253028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:24.026 [2024-12-05 19:42:51.253047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.526 ms 00:24:24.026 [2024-12-05 19:42:51.253056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.026 [2024-12-05 19:42:51.253136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.026 [2024-12-05 19:42:51.253147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:24.026 [2024-12-05 19:42:51.253155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:24.026 [2024-12-05 19:42:51.253162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.026 [2024-12-05 19:42:51.258813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:24.026 [2024-12-05 19:42:51.258857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:24.026 [2024-12-05 19:42:51.258868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.562 ms 00:24:24.026 [2024-12-05 19:42:51.258881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.026 [2024-12-05 19:42:51.258965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.026 [2024-12-05 19:42:51.258974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:24.026 [2024-12-05 19:42:51.258982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:24.026 [2024-12-05 19:42:51.258989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.026 [2024-12-05 19:42:51.259046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.026 [2024-12-05 19:42:51.259056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:24.026 [2024-12-05 19:42:51.259064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:24.026 [2024-12-05 19:42:51.259071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.026 [2024-12-05 19:42:51.259097] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:24.026 [2024-12-05 19:42:51.263118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.026 [2024-12-05 19:42:51.263173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:24.026 [2024-12-05 19:42:51.263189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.026 ms 00:24:24.026 [2024-12-05 19:42:51.263196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.026 [2024-12-05 19:42:51.263242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.026 [2024-12-05 19:42:51.263251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:24.026 [2024-12-05 19:42:51.263260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:24.026 [2024-12-05 19:42:51.263267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.026 [2024-12-05 19:42:51.263329] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:24.026 [2024-12-05 19:42:51.263350] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:24.026 [2024-12-05 19:42:51.263384] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:24.026 [2024-12-05 19:42:51.263402] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:24.026 [2024-12-05 19:42:51.263509] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:24.026 [2024-12-05 19:42:51.263520] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:24.026 [2024-12-05 19:42:51.263533] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:24.026 [2024-12-05 19:42:51.263548] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:24.026 [2024-12-05 19:42:51.263562] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:24.026 [2024-12-05 19:42:51.263574] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:24.026 [2024-12-05 19:42:51.263585] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:24.026 [2024-12-05 19:42:51.263596] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:24.026 [2024-12-05 19:42:51.263603] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:24.026 [2024-12-05 19:42:51.263611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.026 [2024-12-05 19:42:51.263619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:24.026 [2024-12-05 19:42:51.263626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:24:24.026 [2024-12-05 19:42:51.263634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.026 [2024-12-05 19:42:51.263742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.026 [2024-12-05 19:42:51.263751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:24.026 [2024-12-05 19:42:51.263759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:24:24.026 [2024-12-05 19:42:51.263766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.026 [2024-12-05 19:42:51.263876] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:24.026 [2024-12-05 19:42:51.263887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:24.026 [2024-12-05 19:42:51.263896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:24.026 [2024-12-05 19:42:51.263904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.026 [2024-12-05 19:42:51.263911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:24.026 [2024-12-05 19:42:51.263918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:24.026 [2024-12-05 19:42:51.263924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:24.026 [2024-12-05 19:42:51.263931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:24.026 [2024-12-05 19:42:51.263938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:24.026 [2024-12-05 19:42:51.263945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:24.026 [2024-12-05 19:42:51.263952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:24.026 [2024-12-05 19:42:51.263959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:24.026 [2024-12-05 19:42:51.263965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:24.026 [2024-12-05 19:42:51.263978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:24.026 [2024-12-05 19:42:51.263985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:24.026 [2024-12-05 19:42:51.263991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.026 [2024-12-05 19:42:51.263998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:24.026 [2024-12-05 19:42:51.264004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:24.026 [2024-12-05 19:42:51.264011] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.026 [2024-12-05 19:42:51.264018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:24.026 [2024-12-05 19:42:51.264024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:24.026 [2024-12-05 19:42:51.264030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:24.026 [2024-12-05 19:42:51.264037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:24.026 [2024-12-05 19:42:51.264043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:24.026 [2024-12-05 19:42:51.264050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:24.026 [2024-12-05 19:42:51.264056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:24.026 [2024-12-05 19:42:51.264062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:24.026 [2024-12-05 19:42:51.264070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:24.026 [2024-12-05 19:42:51.264077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:24.026 [2024-12-05 19:42:51.264083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:24.026 [2024-12-05 19:42:51.264089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:24.026 [2024-12-05 19:42:51.264095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:24.026 [2024-12-05 19:42:51.264102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:24.026 [2024-12-05 19:42:51.264108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:24.026 [2024-12-05 19:42:51.264115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:24.026 [2024-12-05 19:42:51.264121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:24.026 [2024-12-05 19:42:51.264127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:24.026 [2024-12-05 19:42:51.264134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:24.026 [2024-12-05 19:42:51.264140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:24.026 [2024-12-05 19:42:51.264147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.027 [2024-12-05 19:42:51.264153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:24.027 [2024-12-05 19:42:51.264160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:24.027 [2024-12-05 19:42:51.264166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.027 [2024-12-05 19:42:51.264173] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:24.027 [2024-12-05 19:42:51.264180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:24.027 [2024-12-05 19:42:51.264187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:24.027 [2024-12-05 19:42:51.264194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.027 [2024-12-05 19:42:51.264201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:24.027 [2024-12-05 19:42:51.264207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:24.027 [2024-12-05 19:42:51.264214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:24.027 
[2024-12-05 19:42:51.264220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:24.027 [2024-12-05 19:42:51.264226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:24.027 [2024-12-05 19:42:51.264232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:24.027 [2024-12-05 19:42:51.264240] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:24.027 [2024-12-05 19:42:51.264249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:24.027 [2024-12-05 19:42:51.264260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:24.027 [2024-12-05 19:42:51.264267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:24.027 [2024-12-05 19:42:51.264274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:24.027 [2024-12-05 19:42:51.264281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:24.027 [2024-12-05 19:42:51.264288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:24.027 [2024-12-05 19:42:51.264295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:24.027 [2024-12-05 19:42:51.264303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:24.027 [2024-12-05 19:42:51.264310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:24.027 [2024-12-05 19:42:51.264317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:24.027 [2024-12-05 19:42:51.264323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:24.027 [2024-12-05 19:42:51.264330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:24.027 [2024-12-05 19:42:51.264338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:24.027 [2024-12-05 19:42:51.264344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:24.027 [2024-12-05 19:42:51.264351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:24.027 [2024-12-05 19:42:51.264358] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:24.027 [2024-12-05 19:42:51.264366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:24.027 [2024-12-05 19:42:51.264374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:24.027 [2024-12-05 19:42:51.264381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:24.027 [2024-12-05 19:42:51.264388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:24.027 [2024-12-05 19:42:51.264395] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:24.027 [2024-12-05 19:42:51.264403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.027 [2024-12-05 19:42:51.264410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:24.027 [2024-12-05 19:42:51.264417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:24:24.027 [2024-12-05 19:42:51.264423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.284 [2024-12-05 19:42:51.290983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.284 [2024-12-05 19:42:51.291037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:24.284 [2024-12-05 19:42:51.291049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.513 ms 00:24:24.284 [2024-12-05 19:42:51.291060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.284 [2024-12-05 19:42:51.291151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.284 [2024-12-05 19:42:51.291159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:24.284 [2024-12-05 19:42:51.291167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:24:24.284 [2024-12-05 19:42:51.291174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.284 [2024-12-05 19:42:51.333679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.284 [2024-12-05 19:42:51.333739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:24.284 [2024-12-05 19:42:51.333753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.428 ms 00:24:24.284 [2024-12-05 19:42:51.333762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.284 [2024-12-05 19:42:51.333821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.284 [2024-12-05 19:42:51.333832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:24.284 [2024-12-05 19:42:51.333845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:24.284 [2024-12-05 19:42:51.333853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.284 [2024-12-05 19:42:51.334256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.284 [2024-12-05 19:42:51.334288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:24.285 [2024-12-05 19:42:51.334298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:24:24.285 [2024-12-05 19:42:51.334305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.285 [2024-12-05 19:42:51.334439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.285 [2024-12-05 19:42:51.334448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:24.285 [2024-12-05 19:42:51.334461] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:24:24.285 [2024-12-05 19:42:51.334468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.285 [2024-12-05 19:42:51.347686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.285 [2024-12-05 19:42:51.347738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:24.285 [2024-12-05 19:42:51.347750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.198 ms 00:24:24.285 [2024-12-05 19:42:51.347758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.285 [2024-12-05 19:42:51.360427] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:24.285 [2024-12-05 19:42:51.360474] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:24.285 [2024-12-05 19:42:51.360487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.285 [2024-12-05 19:42:51.360496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:24.285 [2024-12-05 19:42:51.360506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.613 ms 00:24:24.285 [2024-12-05 19:42:51.360513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.285 [2024-12-05 19:42:51.385130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.285 [2024-12-05 19:42:51.385384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:24.285 [2024-12-05 19:42:51.385404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.548 ms 00:24:24.285 [2024-12-05 19:42:51.385413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.285 [2024-12-05 19:42:51.398446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.285 [2024-12-05 19:42:51.398510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:24.285 [2024-12-05 19:42:51.398525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.585 ms 00:24:24.285 [2024-12-05 19:42:51.398533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.285 [2024-12-05 19:42:51.411050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.285 [2024-12-05 19:42:51.411114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:24.285 [2024-12-05 19:42:51.411129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.445 ms 00:24:24.285 [2024-12-05 19:42:51.411137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.285 [2024-12-05 19:42:51.411863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.285 [2024-12-05 19:42:51.412000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:24.285 [2024-12-05 19:42:51.412021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:24:24.285 [2024-12-05 19:42:51.412028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.285 [2024-12-05 19:42:51.468990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.285 [2024-12-05 19:42:51.469051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:24.285 [2024-12-05 19:42:51.469074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 56.930 ms 00:24:24.285 [2024-12-05 19:42:51.469083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.285 [2024-12-05 19:42:51.480190] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:24.285 [2024-12-05 19:42:51.482955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.285 [2024-12-05 19:42:51.483121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:24.285 [2024-12-05 19:42:51.483139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.797 ms 00:24:24.285 [2024-12-05 19:42:51.483147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.285 [2024-12-05 19:42:51.483266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.285 [2024-12-05 19:42:51.483277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:24.285 [2024-12-05 19:42:51.483289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:24.285 [2024-12-05 19:42:51.483296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.285 [2024-12-05 19:42:51.483359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.285 [2024-12-05 19:42:51.483370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:24.285 [2024-12-05 19:42:51.483378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:24.285 [2024-12-05 19:42:51.483385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.285 [2024-12-05 19:42:51.483403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.285 [2024-12-05 19:42:51.483411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:24.285 [2024-12-05 19:42:51.483419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:24.285 [2024-12-05 19:42:51.483426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.285 [2024-12-05 19:42:51.483457] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:24.285 [2024-12-05 19:42:51.483467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.285 [2024-12-05 19:42:51.483475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:24.285 [2024-12-05 19:42:51.483483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:24.285 [2024-12-05 19:42:51.483490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.285 [2024-12-05 19:42:51.508086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.285 [2024-12-05 19:42:51.508143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:24.285 [2024-12-05 19:42:51.508161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.578 ms 00:24:24.285 [2024-12-05 19:42:51.508168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.285 [2024-12-05 19:42:51.508264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.285 [2024-12-05 19:42:51.508275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:24.285 [2024-12-05 19:42:51.508283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:24.285 [2024-12-05 19:42:51.508291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:24.285 [2024-12-05 19:42:51.509301] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 271.114 ms, result 0 00:24:25.655  [2024-12-05T19:42:53.842Z] Copying: 46/1024 [MB] (46 MBps) [progress meter: 21 intermediate per-second updates, 41-51 MBps, condensed] [2024-12-05T19:43:14.902Z] Copying: 1024/1024 [MB] (average 46 MBps)[2024-12-05 19:43:14.716927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.647 [2024-12-05 19:43:14.716992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:47.647 [2024-12-05 19:43:14.717006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:47.647 [2024-12-05 19:43:14.717014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.647 [2024-12-05 19:43:14.717036] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:47.647 [2024-12-05 19:43:14.719625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.647 [2024-12-05 19:43:14.719666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:47.647 [2024-12-05 19:43:14.719685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.574 ms 00:24:47.647 [2024-12-05 19:43:14.719694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.647 [2024-12-05 19:43:14.719925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.647 [2024-12-05 19:43:14.719935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:47.647 [2024-12-05 19:43:14.719943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:24:47.647 [2024-12-05 19:43:14.719950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.647 [2024-12-05 19:43:14.723375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.647 [2024-12-05 19:43:14.723524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:47.647 [2024-12-05 19:43:14.723540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.413 ms 00:24:47.647 [2024-12-05 19:43:14.723553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.647 [2024-12-05 19:43:14.731309] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.647 [2024-12-05 19:43:14.731427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:47.647 [2024-12-05 19:43:14.731485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.733 ms 00:24:47.647 [2024-12-05 19:43:14.731508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.647 [2024-12-05 19:43:14.757179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.647 [2024-12-05 19:43:14.757379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:47.647 [2024-12-05 19:43:14.757440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.590 ms 00:24:47.647 [2024-12-05 19:43:14.757463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.647 [2024-12-05 19:43:14.772958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.647 [2024-12-05 19:43:14.773161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:47.647 [2024-12-05 19:43:14.773226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.452 ms 00:24:47.647 [2024-12-05 19:43:14.773249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.647 [2024-12-05 19:43:14.773414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.647 [2024-12-05 19:43:14.773441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:47.647 [2024-12-05 19:43:14.773460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:24:47.647 [2024-12-05 19:43:14.773513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.647 [2024-12-05 19:43:14.797297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.647 [2024-12-05 19:43:14.797464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:47.647 [2024-12-05 19:43:14.797515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.752 ms 00:24:47.647 [2024-12-05 19:43:14.797537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.647 [2024-12-05 19:43:14.820344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.647 [2024-12-05 19:43:14.820489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:47.647 [2024-12-05 19:43:14.820542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.749 ms 00:24:47.647 [2024-12-05 19:43:14.820563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.647 [2024-12-05 19:43:14.843403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.647 [2024-12-05 19:43:14.843562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:47.647 [2024-12-05 19:43:14.843615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.771 ms 00:24:47.647 [2024-12-05 19:43:14.843636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.647 [2024-12-05 19:43:14.866376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.647 [2024-12-05 19:43:14.866532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:47.647 [2024-12-05 19:43:14.866580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.660 ms 00:24:47.647 [2024-12-05 19:43:14.866602] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:47.647 [2024-12-05 19:43:14.866637] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:47.647 [2024-12-05 19:43:14.866678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free (100 identical per-band entries condensed) 00:24:47.649 [2024-12-05 19:43:14.869272] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:47.649 [2024-12-05 19:43:14.869280] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f2a2c721-9529-45cb-beae-3973f9aeaf2f 00:24:47.649 [2024-12-05 19:43:14.869287] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:47.649 [2024-12-05 19:43:14.869295] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:47.649 [2024-12-05 19:43:14.869302] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:47.649 [2024-12-05 19:43:14.869310] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:47.649 [2024-12-05 19:43:14.869326] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:47.649 [2024-12-05 19:43:14.869334] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:47.649 [2024-12-05 19:43:14.869341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:47.649 [2024-12-05 19:43:14.869347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:47.649 [2024-12-05 19:43:14.869354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:47.649 [2024-12-05 19:43:14.869364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.649 [2024-12-05 19:43:14.869372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:47.649 [2024-12-05 19:43:14.869382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.727 ms 00:24:47.649 [2024-12-05 19:43:14.869392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.649 [2024-12-05 19:43:14.881662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.649 [2024-12-05 19:43:14.881710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:47.649 [2024-12-05 19:43:14.881721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.241 ms 00:24:47.649 [2024-12-05 19:43:14.881729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.649 [2024-12-05 19:43:14.882094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.649 [2024-12-05 19:43:14.882113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:47.649 [2024-12-05 19:43:14.882127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:24:47.649 [2024-12-05 19:43:14.882135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.908 [2024-12-05 19:43:14.914415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.908 [2024-12-05 19:43:14.914466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:47.908 [2024-12-05 19:43:14.914476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.908 [2024-12-05 19:43:14.914484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.908 [2024-12-05 19:43:14.914546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.908 [2024-12-05 19:43:14.914554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:47.908 
[2024-12-05 19:43:14.914566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.908 [2024-12-05 19:43:14.914574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.908 [2024-12-05 19:43:14.914638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.909 [2024-12-05 19:43:14.914648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:47.909 [2024-12-05 19:43:14.914655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.909 [2024-12-05 19:43:14.914663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.909 [2024-12-05 19:43:14.914697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.909 [2024-12-05 19:43:14.914706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:47.909 [2024-12-05 19:43:14.914713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.909 [2024-12-05 19:43:14.914724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.909 [2024-12-05 19:43:14.991863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.909 [2024-12-05 19:43:14.991914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:47.909 [2024-12-05 19:43:14.991926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.909 [2024-12-05 19:43:14.991934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.909 [2024-12-05 19:43:15.055409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.909 [2024-12-05 19:43:15.055459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:47.909 [2024-12-05 19:43:15.055477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.909 [2024-12-05 19:43:15.055485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.909 [2024-12-05 19:43:15.055559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.909 [2024-12-05 19:43:15.055568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:47.909 [2024-12-05 19:43:15.055577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.909 [2024-12-05 19:43:15.055584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.909 [2024-12-05 19:43:15.055618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.909 [2024-12-05 19:43:15.055626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:47.909 [2024-12-05 19:43:15.055634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.909 [2024-12-05 19:43:15.055641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.909 [2024-12-05 19:43:15.055753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.909 [2024-12-05 19:43:15.055763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:47.909 [2024-12-05 19:43:15.055771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.909 [2024-12-05 19:43:15.055779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.909 [2024-12-05 19:43:15.055806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.909 [2024-12-05 19:43:15.055815] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:47.909 [2024-12-05 19:43:15.055822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.909 [2024-12-05 19:43:15.055830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.909 [2024-12-05 19:43:15.055867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.909 [2024-12-05 19:43:15.055876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:47.909 [2024-12-05 19:43:15.055883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.909 [2024-12-05 19:43:15.055890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.909 [2024-12-05 19:43:15.055930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.909 [2024-12-05 19:43:15.055940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:47.909 [2024-12-05 19:43:15.055947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.909 [2024-12-05 19:43:15.055954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.909 [2024-12-05 19:43:15.056063] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.113 ms, result 0 00:24:48.851 00:24:48.851 00:24:48.851 19:43:15 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:50.750 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:50.750 19:43:17 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:24:50.750 [2024-12-05 19:43:17.968751] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:24:50.750 [2024-12-05 19:43:17.969076] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78650 ] 00:24:51.008 [2024-12-05 19:43:18.120639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.008 [2024-12-05 19:43:18.230799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.266 [2024-12-05 19:43:18.453310] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:51.266 [2024-12-05 19:43:18.453373] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:51.525 [2024-12-05 19:43:18.601236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.525 [2024-12-05 19:43:18.601435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:51.525 [2024-12-05 19:43:18.601452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:51.525 [2024-12-05 19:43:18.601459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.525 [2024-12-05 19:43:18.601509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.525 [2024-12-05 19:43:18.601519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:51.525 [2024-12-05 19:43:18.601525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:51.525 [2024-12-05 19:43:18.601531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.525 [2024-12-05 19:43:18.601548] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:51.525 [2024-12-05 19:43:18.602108] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:51.525 [2024-12-05 19:43:18.602125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.525 [2024-12-05 19:43:18.602131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:51.525 [2024-12-05 19:43:18.602138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:24:51.525 [2024-12-05 19:43:18.602144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.525 [2024-12-05 19:43:18.603198] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:51.525 [2024-12-05 19:43:18.613199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.525 [2024-12-05 19:43:18.613228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:51.525 [2024-12-05 19:43:18.613238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.002 ms 00:24:51.525 [2024-12-05 19:43:18.613244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.525 [2024-12-05 19:43:18.613296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.525 [2024-12-05 19:43:18.613304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:51.525 [2024-12-05 19:43:18.613311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:51.525 [2024-12-05 19:43:18.613317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.525 [2024-12-05 19:43:18.617995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:51.525 [2024-12-05 19:43:18.618021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:51.525 [2024-12-05 19:43:18.618029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.639 ms 00:24:51.525 [2024-12-05 19:43:18.618038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.525 [2024-12-05 19:43:18.618094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.525 [2024-12-05 19:43:18.618102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:51.525 [2024-12-05 19:43:18.618108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:51.525 [2024-12-05 19:43:18.618114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.525 [2024-12-05 19:43:18.618160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.525 [2024-12-05 19:43:18.618168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:51.525 [2024-12-05 19:43:18.618175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:51.525 [2024-12-05 19:43:18.618181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.525 [2024-12-05 19:43:18.618202] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:51.525 [2024-12-05 19:43:18.620902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.525 [2024-12-05 19:43:18.620923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:51.525 [2024-12-05 19:43:18.620933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.706 ms 00:24:51.525 [2024-12-05 19:43:18.620939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.525 [2024-12-05 19:43:18.620965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.525 [2024-12-05 19:43:18.620971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:51.525 [2024-12-05 19:43:18.620978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:51.525 [2024-12-05 19:43:18.620984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.525 [2024-12-05 19:43:18.621000] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:51.525 [2024-12-05 19:43:18.621016] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:51.525 [2024-12-05 19:43:18.621043] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:51.525 [2024-12-05 19:43:18.621057] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:51.525 [2024-12-05 19:43:18.621138] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:51.525 [2024-12-05 19:43:18.621146] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:51.525 [2024-12-05 19:43:18.621154] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:51.525 [2024-12-05 19:43:18.621162] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:51.525 [2024-12-05 19:43:18.621168] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:51.526 [2024-12-05 19:43:18.621175] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:51.526 [2024-12-05 19:43:18.621181] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:51.526 [2024-12-05 19:43:18.621188] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:51.526 [2024-12-05 19:43:18.621194] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:51.526 [2024-12-05 19:43:18.621199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.526 [2024-12-05 19:43:18.621205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:51.526 [2024-12-05 19:43:18.621211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:24:51.526 [2024-12-05 19:43:18.621217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.526 [2024-12-05 19:43:18.621282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.526 [2024-12-05 19:43:18.621288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:51.526 [2024-12-05 19:43:18.621293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:51.526 [2024-12-05 19:43:18.621299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.526 [2024-12-05 19:43:18.621380] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:51.526 [2024-12-05 19:43:18.621388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:51.526 [2024-12-05 19:43:18.621394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:51.526 [2024-12-05 19:43:18.621399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.526 [2024-12-05 19:43:18.621405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:51.526 [2024-12-05 19:43:18.621411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:51.526 [2024-12-05 19:43:18.621416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:51.526 [2024-12-05 19:43:18.621422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:51.526 [2024-12-05 19:43:18.621428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:51.526 [2024-12-05 19:43:18.621433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:51.526 [2024-12-05 19:43:18.621438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:51.526 [2024-12-05 19:43:18.621443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:51.526 [2024-12-05 19:43:18.621449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:51.526 [2024-12-05 19:43:18.621460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:51.526 [2024-12-05 19:43:18.621466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:51.526 [2024-12-05 19:43:18.621472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.526 [2024-12-05 19:43:18.621477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:51.526 [2024-12-05 19:43:18.621482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:51.526 [2024-12-05 19:43:18.621487] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.526 [2024-12-05 19:43:18.621493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:51.526 [2024-12-05 19:43:18.621498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:51.526 [2024-12-05 19:43:18.621503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.526 [2024-12-05 19:43:18.621509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:51.526 [2024-12-05 19:43:18.621514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:51.526 [2024-12-05 19:43:18.621519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.526 [2024-12-05 19:43:18.621524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:51.526 [2024-12-05 19:43:18.621529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:51.526 [2024-12-05 19:43:18.621534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.526 [2024-12-05 19:43:18.621540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:51.526 [2024-12-05 19:43:18.621545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:51.526 [2024-12-05 19:43:18.621550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.526 [2024-12-05 19:43:18.621555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:51.526 [2024-12-05 19:43:18.621560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:51.526 [2024-12-05 19:43:18.621565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:51.526 [2024-12-05 19:43:18.621571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:51.526 [2024-12-05 19:43:18.621576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:51.526 [2024-12-05 19:43:18.621581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:51.526 [2024-12-05 19:43:18.621586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:51.526 [2024-12-05 19:43:18.621591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:51.526 [2024-12-05 19:43:18.621596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.526 [2024-12-05 19:43:18.621600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:51.526 [2024-12-05 19:43:18.621606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:51.526 [2024-12-05 19:43:18.621612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.526 [2024-12-05 19:43:18.621617] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:51.526 [2024-12-05 19:43:18.621623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:51.526 [2024-12-05 19:43:18.621629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:51.526 [2024-12-05 19:43:18.621635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.526 [2024-12-05 19:43:18.621641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:51.526 [2024-12-05 19:43:18.621646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:51.526 [2024-12-05 19:43:18.621651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:51.526 
[2024-12-05 19:43:18.621656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:51.526 [2024-12-05 19:43:18.621661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:51.526 [2024-12-05 19:43:18.621666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:51.526 [2024-12-05 19:43:18.621688] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:51.526 [2024-12-05 19:43:18.621695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:51.526 [2024-12-05 19:43:18.621704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:51.526 [2024-12-05 19:43:18.621710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:51.526 [2024-12-05 19:43:18.621716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:51.526 [2024-12-05 19:43:18.621722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:51.526 [2024-12-05 19:43:18.621727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:51.526 [2024-12-05 19:43:18.621733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:51.526 [2024-12-05 19:43:18.621738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:51.526 [2024-12-05 19:43:18.621744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:51.526 [2024-12-05 19:43:18.621749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:51.526 [2024-12-05 19:43:18.621755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:51.526 [2024-12-05 19:43:18.621761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:51.526 [2024-12-05 19:43:18.621772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:51.526 [2024-12-05 19:43:18.621777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:51.526 [2024-12-05 19:43:18.621783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:51.526 [2024-12-05 19:43:18.621789] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:51.526 [2024-12-05 19:43:18.621795] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:51.526 [2024-12-05 19:43:18.621801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:51.526 [2024-12-05 19:43:18.621806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:51.526 [2024-12-05 19:43:18.621812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:51.526 [2024-12-05 19:43:18.621817] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:51.526 [2024-12-05 19:43:18.621823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.526 [2024-12-05 19:43:18.621829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:51.526 [2024-12-05 19:43:18.621834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.496 ms 00:24:51.526 [2024-12-05 19:43:18.621840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.526 [2024-12-05 19:43:18.643567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.526 [2024-12-05 19:43:18.643609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:51.526 [2024-12-05 19:43:18.643619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.690 ms 00:24:51.527 [2024-12-05 19:43:18.643629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.527 [2024-12-05 19:43:18.643722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.527 [2024-12-05 19:43:18.643729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:51.527 [2024-12-05 19:43:18.643736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:51.527 [2024-12-05 19:43:18.643742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.527 [2024-12-05 19:43:18.681412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.527 [2024-12-05 19:43:18.681470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:51.527 [2024-12-05 19:43:18.681482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.611 ms 00:24:51.527 [2024-12-05 19:43:18.681489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.527 [2024-12-05 19:43:18.681543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.527 [2024-12-05 19:43:18.681551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:51.527 [2024-12-05 19:43:18.681561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:51.527 [2024-12-05 19:43:18.681567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.527 [2024-12-05 19:43:18.681929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.527 [2024-12-05 19:43:18.681950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:51.527 [2024-12-05 19:43:18.681959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:24:51.527 [2024-12-05 19:43:18.681965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.527 [2024-12-05 19:43:18.682070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.527 [2024-12-05 19:43:18.682077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:51.527 [2024-12-05 19:43:18.682084] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:24:51.527 [2024-12-05 19:43:18.682095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.527 [2024-12-05 19:43:18.693013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.527 [2024-12-05 19:43:18.693043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:51.527 [2024-12-05 19:43:18.693054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.900 ms 00:24:51.527 [2024-12-05 19:43:18.693060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.527 [2024-12-05 19:43:18.703181] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:51.527 [2024-12-05 19:43:18.703218] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:51.527 [2024-12-05 19:43:18.703229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.527 [2024-12-05 19:43:18.703236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:51.527 [2024-12-05 19:43:18.703244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.063 ms 00:24:51.527 [2024-12-05 19:43:18.703250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.527 [2024-12-05 19:43:18.722423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.527 [2024-12-05 19:43:18.722606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:51.527 [2024-12-05 19:43:18.722622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.129 ms 00:24:51.527 [2024-12-05 19:43:18.722628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.527 [2024-12-05 19:43:18.732173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.527 [2024-12-05 19:43:18.732211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:51.527 [2024-12-05 19:43:18.732220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.479 ms 00:24:51.527 [2024-12-05 19:43:18.732226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.527 [2024-12-05 19:43:18.741462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.527 [2024-12-05 19:43:18.741597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:51.527 [2024-12-05 19:43:18.741611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.200 ms 00:24:51.527 [2024-12-05 19:43:18.741618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.527 [2024-12-05 19:43:18.742135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.527 [2024-12-05 19:43:18.742152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:51.527 [2024-12-05 19:43:18.742163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:24:51.527 [2024-12-05 19:43:18.742169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.785 [2024-12-05 19:43:18.787513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.785 [2024-12-05 19:43:18.787574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:51.785 [2024-12-05 19:43:18.787590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 45.328 ms 00:24:51.785 [2024-12-05 19:43:18.787597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.785 [2024-12-05 19:43:18.796079] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:51.785 [2024-12-05 19:43:18.798507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.785 [2024-12-05 19:43:18.798534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:51.785 [2024-12-05 19:43:18.798546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.853 ms 00:24:51.785 [2024-12-05 19:43:18.798553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.785 [2024-12-05 19:43:18.798634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.785 [2024-12-05 19:43:18.798643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:51.785 [2024-12-05 19:43:18.798652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:51.785 [2024-12-05 19:43:18.798658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.785 [2024-12-05 19:43:18.798753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.785 [2024-12-05 19:43:18.798762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:51.785 [2024-12-05 19:43:18.798769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:51.785 [2024-12-05 19:43:18.798776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.785 [2024-12-05 19:43:18.798794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.785 [2024-12-05 19:43:18.798801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:51.785 [2024-12-05 19:43:18.798807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:51.785 [2024-12-05 19:43:18.798813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.785 [2024-12-05 19:43:18.798839] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:51.785 [2024-12-05 19:43:18.798846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.785 [2024-12-05 19:43:18.798852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:51.785 [2024-12-05 19:43:18.798858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:51.785 [2024-12-05 19:43:18.798864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.785 [2024-12-05 19:43:18.817754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.785 [2024-12-05 19:43:18.817885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:51.785 [2024-12-05 19:43:18.817906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.872 ms 00:24:51.785 [2024-12-05 19:43:18.817913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.785 [2024-12-05 19:43:18.817972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.785 [2024-12-05 19:43:18.817980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:51.785 [2024-12-05 19:43:18.817987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:51.785 [2024-12-05 19:43:18.817993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:51.785 [2024-12-05 19:43:18.818831] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 217.232 ms, result 0 00:24:52.716  [2024-12-05T19:43:20.906Z] Copying: 51/1024 [MB] (51 MBps) [2024-12-05T19:43:21.839Z] Copying: 97/1024 [MB] (46 MBps) [2024-12-05T19:43:22.875Z] Copying: 139/1024 [MB] (41 MBps) [2024-12-05T19:43:23.910Z] Copying: 182/1024 [MB] (43 MBps) [2024-12-05T19:43:24.840Z] Copying: 227/1024 [MB] (44 MBps) [2024-12-05T19:43:26.210Z] Copying: 271/1024 [MB] (44 MBps) [2024-12-05T19:43:27.144Z] Copying: 314/1024 [MB] (43 MBps) [2024-12-05T19:43:28.079Z] Copying: 358/1024 [MB] (43 MBps) [2024-12-05T19:43:29.033Z] Copying: 400/1024 [MB] (42 MBps) [2024-12-05T19:43:29.969Z] Copying: 443/1024 [MB] (43 MBps) [2024-12-05T19:43:30.902Z] Copying: 482/1024 [MB] (38 MBps) [2024-12-05T19:43:31.837Z] Copying: 525/1024 [MB] (42 MBps) [2024-12-05T19:43:33.213Z] Copying: 566/1024 [MB] (41 MBps) [2024-12-05T19:43:34.145Z] Copying: 609/1024 [MB] (42 MBps) [2024-12-05T19:43:35.077Z] Copying: 651/1024 [MB] (41 MBps) [2024-12-05T19:43:36.009Z] Copying: 694/1024 [MB] (42 MBps) [2024-12-05T19:43:36.943Z] Copying: 738/1024 [MB] (44 MBps) [2024-12-05T19:43:37.879Z] Copying: 787/1024 [MB] (48 MBps) [2024-12-05T19:43:39.253Z] Copying: 827/1024 [MB] (40 MBps) [2024-12-05T19:43:39.891Z] Copying: 866/1024 [MB] (38 MBps) [2024-12-05T19:43:41.264Z] Copying: 909/1024 [MB] (43 MBps) [2024-12-05T19:43:42.194Z] Copying: 952/1024 [MB] (42 MBps) [2024-12-05T19:43:43.124Z] Copying: 994/1024 [MB] (42 MBps) [2024-12-05T19:43:43.689Z] Copying: 1023/1024 [MB] (28 MBps) [2024-12-05T19:43:43.689Z] Copying: 1024/1024 [MB] (average 41 MBps)[2024-12-05 19:43:43.450812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.434 [2024-12-05 19:43:43.450880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:16.434 [2024-12-05 19:43:43.450903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:16.434 [2024-12-05 19:43:43.450911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.434 [2024-12-05 19:43:43.452338] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:16.434 [2024-12-05 19:43:43.455917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.434 [2024-12-05 19:43:43.455959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:16.434 [2024-12-05 19:43:43.455972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.542 ms 00:25:16.434 [2024-12-05 19:43:43.455981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.434 [2024-12-05 19:43:43.468249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.434 [2024-12-05 19:43:43.468305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:16.434 [2024-12-05 19:43:43.468317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.203 ms 00:25:16.434 [2024-12-05 19:43:43.468333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.434 [2024-12-05 19:43:43.488941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.434 [2024-12-05 19:43:43.489011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:16.434 [2024-12-05 19:43:43.489026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.587 ms 00:25:16.434 [2024-12-05 
19:43:43.489034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.434 [2024-12-05 19:43:43.497581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.434 [2024-12-05 19:43:43.497687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:16.434 [2024-12-05 19:43:43.497704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.502 ms 00:25:16.434 [2024-12-05 19:43:43.497719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.434 [2024-12-05 19:43:43.536007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.434 [2024-12-05 19:43:43.536098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:16.434 [2024-12-05 19:43:43.536117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.206 ms 00:25:16.434 [2024-12-05 19:43:43.536129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.434 [2024-12-05 19:43:43.556400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.434 [2024-12-05 19:43:43.556495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:16.434 [2024-12-05 19:43:43.556518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.172 ms 00:25:16.434 [2024-12-05 19:43:43.556532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.434 [2024-12-05 19:43:43.611294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.434 [2024-12-05 19:43:43.611583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:16.434 [2024-12-05 19:43:43.611619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.654 ms 00:25:16.434 [2024-12-05 19:43:43.611633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.434 [2024-12-05 19:43:43.652388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.434 [2024-12-05 19:43:43.652503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:16.434 [2024-12-05 19:43:43.652524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.715 ms 00:25:16.434 [2024-12-05 19:43:43.652538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.693 [2024-12-05 19:43:43.692141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.693 [2024-12-05 19:43:43.692237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:16.693 [2024-12-05 19:43:43.692260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.506 ms 00:25:16.693 [2024-12-05 19:43:43.692274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.693 [2024-12-05 19:43:43.718434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.693 [2024-12-05 19:43:43.718710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:16.693 [2024-12-05 19:43:43.718731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.054 ms 00:25:16.693 [2024-12-05 19:43:43.718739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.693 [2024-12-05 19:43:43.742667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.693 [2024-12-05 19:43:43.742732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:16.693 [2024-12-05 19:43:43.742745] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.845 ms 00:25:16.693 [2024-12-05 19:43:43.742754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.693 [2024-12-05 19:43:43.742809] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:16.694 [2024-12-05 19:43:43.742825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 110336 / 261120 wr_cnt: 1 state: open 00:25:16.694 [2024-12-05 19:43:43.742836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.742844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.742852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.742861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.742869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.742877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.742886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.742894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.742902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.742910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.742917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.742925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.742933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.742941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.742948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.742956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.742963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.742970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.742978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.742985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.742993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:25:16.694 [2024-12-05 19:43:43.743007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:16.694 [2024-12-05 19:43:43.743485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:16.695 [2024-12-05 19:43:43.743493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:16.695 [2024-12-05 19:43:43.743500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:16.695 [2024-12-05 19:43:43.743508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:16.695 [2024-12-05 19:43:43.743515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:16.695 [2024-12-05 19:43:43.743523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:16.695 [2024-12-05 19:43:43.743531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:16.695 [2024-12-05 19:43:43.743539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:16.695 [2024-12-05 19:43:43.743546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:16.695 [2024-12-05 19:43:43.743553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:16.695 [2024-12-05 19:43:43.743561] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:16.695 [2024-12-05 19:43:43.743569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:16.695 [2024-12-05 19:43:43.743576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:16.695 [2024-12-05 19:43:43.743592] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:16.695 [2024-12-05 19:43:43.743600] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f2a2c721-9529-45cb-beae-3973f9aeaf2f 00:25:16.695 [2024-12-05 19:43:43.743608] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 110336 00:25:16.695 [2024-12-05 19:43:43.743615] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 111296 00:25:16.695 [2024-12-05 19:43:43.743622] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 110336 00:25:16.695 [2024-12-05 19:43:43.743630] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0087 00:25:16.695 [2024-12-05 19:43:43.743649] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:16.695 [2024-12-05 19:43:43.743658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:16.695 [2024-12-05 19:43:43.743665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:16.695 [2024-12-05 19:43:43.743690] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:16.695 [2024-12-05 19:43:43.743697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:16.695 [2024-12-05 19:43:43.743704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.695 [2024-12-05 19:43:43.743712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:16.695 [2024-12-05 19:43:43.743721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.897 ms 00:25:16.695 [2024-12-05 19:43:43.743728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.695 [2024-12-05 19:43:43.756359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.695 [2024-12-05 19:43:43.756414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:16.695 [2024-12-05 19:43:43.756432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.583 ms 00:25:16.695 [2024-12-05 19:43:43.756440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.695 [2024-12-05 19:43:43.756841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.695 [2024-12-05 19:43:43.756852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:16.695 [2024-12-05 19:43:43.756861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:25:16.695 [2024-12-05 19:43:43.756868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.695 [2024-12-05 19:43:43.790020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.695 [2024-12-05 19:43:43.790259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:16.695 [2024-12-05 19:43:43.790277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.695 [2024-12-05 19:43:43.790285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.695 [2024-12-05 19:43:43.790356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:25:16.695 [2024-12-05 19:43:43.790365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:16.695 [2024-12-05 19:43:43.790373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.695 [2024-12-05 19:43:43.790380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.695 [2024-12-05 19:43:43.790448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.695 [2024-12-05 19:43:43.790462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:16.695 [2024-12-05 19:43:43.790470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.695 [2024-12-05 19:43:43.790482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.695 [2024-12-05 19:43:43.790497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.695 [2024-12-05 19:43:43.790506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:16.695 [2024-12-05 19:43:43.790514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.695 [2024-12-05 19:43:43.790521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.695 [2024-12-05 19:43:43.869362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.695 [2024-12-05 19:43:43.869593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:16.695 [2024-12-05 19:43:43.869612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.695 [2024-12-05 19:43:43.869620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.695 [2024-12-05 19:43:43.934310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.695 [2024-12-05 19:43:43.934365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:16.695 [2024-12-05 19:43:43.934376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.695 [2024-12-05 19:43:43.934384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.695 [2024-12-05 19:43:43.934453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.695 [2024-12-05 19:43:43.934462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:16.695 [2024-12-05 19:43:43.934470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.695 [2024-12-05 19:43:43.934482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.695 [2024-12-05 19:43:43.934516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.695 [2024-12-05 19:43:43.934525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:16.695 [2024-12-05 19:43:43.934532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.695 [2024-12-05 19:43:43.934540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.695 [2024-12-05 19:43:43.934627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.695 [2024-12-05 19:43:43.934636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:16.695 [2024-12-05 19:43:43.934644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.695 [2024-12-05 19:43:43.934655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.695 
[2024-12-05 19:43:43.934711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.695 [2024-12-05 19:43:43.934721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:16.695 [2024-12-05 19:43:43.934729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.695 [2024-12-05 19:43:43.934737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.695 [2024-12-05 19:43:43.934770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.695 [2024-12-05 19:43:43.934778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:16.695 [2024-12-05 19:43:43.934786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.695 [2024-12-05 19:43:43.934793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.695 [2024-12-05 19:43:43.934836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.695 [2024-12-05 19:43:43.934845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:16.695 [2024-12-05 19:43:43.934852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.695 [2024-12-05 19:43:43.934860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.695 [2024-12-05 19:43:43.934976] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 486.076 ms, result 0 00:25:19.225 00:25:19.225 00:25:19.225 19:43:46 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:25:19.225 [2024-12-05 19:43:46.161655] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
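The statistics dumped during the shutdown above let the reported WAF be sanity-checked by hand. Write amplification is total device writes divided by user writes, so from the figures in the dump:

    WAF = total writes / user writes = 111296 / 110336 ≈ 1.0087

which matches the logged value. The small excess over 1.0 is largely the FTL's own bookkeeping traffic (band, L2P and NV-cache metadata persisted alongside the user data).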
00:25:19.225 [2024-12-05 19:43:46.161806] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78937 ] 00:25:19.225 [2024-12-05 19:43:46.323880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.225 [2024-12-05 19:43:46.429664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.484 [2024-12-05 19:43:46.690412] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:19.484 [2024-12-05 19:43:46.690487] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:19.742 [2024-12-05 19:43:46.843849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.742 [2024-12-05 19:43:46.844097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:19.742 [2024-12-05 19:43:46.844118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:19.742 [2024-12-05 19:43:46.844128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.742 [2024-12-05 19:43:46.844191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.742 [2024-12-05 19:43:46.844203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:19.742 [2024-12-05 19:43:46.844212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:19.742 [2024-12-05 19:43:46.844220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.742 [2024-12-05 19:43:46.844240] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:19.742 [2024-12-05 19:43:46.845030] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:19.742 [2024-12-05 19:43:46.845047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.742 [2024-12-05 19:43:46.845055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:19.742 [2024-12-05 19:43:46.845063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.812 ms 00:25:19.742 [2024-12-05 19:43:46.845070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.742 [2024-12-05 19:43:46.846167] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:19.742 [2024-12-05 19:43:46.858848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.742 [2024-12-05 19:43:46.859058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:19.742 [2024-12-05 19:43:46.859076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.682 ms 00:25:19.742 [2024-12-05 19:43:46.859085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.742 [2024-12-05 19:43:46.859161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.742 [2024-12-05 19:43:46.859172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:19.742 [2024-12-05 19:43:46.859180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:19.742 [2024-12-05 19:43:46.859187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.742 [2024-12-05 19:43:46.864587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:19.742 [2024-12-05 19:43:46.864626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:19.742 [2024-12-05 19:43:46.864637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.318 ms 00:25:19.742 [2024-12-05 19:43:46.864649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.742 [2024-12-05 19:43:46.864748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.742 [2024-12-05 19:43:46.864758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:19.742 [2024-12-05 19:43:46.864767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:19.742 [2024-12-05 19:43:46.864775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.742 [2024-12-05 19:43:46.864831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.742 [2024-12-05 19:43:46.864840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:19.742 [2024-12-05 19:43:46.864848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:19.742 [2024-12-05 19:43:46.864856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.742 [2024-12-05 19:43:46.864881] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:19.742 [2024-12-05 19:43:46.868199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.742 [2024-12-05 19:43:46.868230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:19.742 [2024-12-05 19:43:46.868242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.323 ms 00:25:19.742 [2024-12-05 19:43:46.868250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.742 [2024-12-05 19:43:46.868285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.742 [2024-12-05 19:43:46.868293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:19.742 [2024-12-05 19:43:46.868301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:19.742 [2024-12-05 19:43:46.868308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.742 [2024-12-05 19:43:46.868329] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:19.742 [2024-12-05 19:43:46.868348] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:19.742 [2024-12-05 19:43:46.868383] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:19.742 [2024-12-05 19:43:46.868400] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:19.742 [2024-12-05 19:43:46.868503] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:19.742 [2024-12-05 19:43:46.868513] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:19.742 [2024-12-05 19:43:46.868523] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:19.742 [2024-12-05 19:43:46.868532] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:19.742 [2024-12-05 19:43:46.868541] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:19.742 [2024-12-05 19:43:46.868549] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:19.742 [2024-12-05 19:43:46.868557] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:19.742 [2024-12-05 19:43:46.868566] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:19.742 [2024-12-05 19:43:46.868574] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:19.742 [2024-12-05 19:43:46.868581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.742 [2024-12-05 19:43:46.868588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:19.742 [2024-12-05 19:43:46.868596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:25:19.742 [2024-12-05 19:43:46.868603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.743 [2024-12-05 19:43:46.868719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.743 [2024-12-05 19:43:46.868729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:19.743 [2024-12-05 19:43:46.868737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:25:19.743 [2024-12-05 19:43:46.868744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.743 [2024-12-05 19:43:46.868868] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:19.743 [2024-12-05 19:43:46.868879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:19.743 [2024-12-05 19:43:46.868887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:19.743 [2024-12-05 19:43:46.868895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:19.743 [2024-12-05 19:43:46.868903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:19.743 [2024-12-05 19:43:46.868909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:19.743 [2024-12-05 19:43:46.868916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:19.743 [2024-12-05 19:43:46.868924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:19.743 [2024-12-05 19:43:46.868931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:19.743 [2024-12-05 19:43:46.868937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:19.743 [2024-12-05 19:43:46.868944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:19.743 [2024-12-05 19:43:46.868951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:19.743 [2024-12-05 19:43:46.868958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:19.743 [2024-12-05 19:43:46.868971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:19.743 [2024-12-05 19:43:46.868977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:19.743 [2024-12-05 19:43:46.868983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:19.743 [2024-12-05 19:43:46.868990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:19.743 [2024-12-05 19:43:46.868996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:19.743 [2024-12-05 19:43:46.869004] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:19.743 [2024-12-05 19:43:46.869011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:19.743 [2024-12-05 19:43:46.869018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:19.743 [2024-12-05 19:43:46.869024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:19.743 [2024-12-05 19:43:46.869031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:19.743 [2024-12-05 19:43:46.869038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:19.743 [2024-12-05 19:43:46.869044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:19.743 [2024-12-05 19:43:46.869050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:19.743 [2024-12-05 19:43:46.869057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:19.743 [2024-12-05 19:43:46.869063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:19.743 [2024-12-05 19:43:46.869070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:19.743 [2024-12-05 19:43:46.869076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:19.743 [2024-12-05 19:43:46.869083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:19.743 [2024-12-05 19:43:46.869089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:19.743 [2024-12-05 19:43:46.869096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:19.743 [2024-12-05 19:43:46.869102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:19.743 [2024-12-05 19:43:46.869109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:19.743 [2024-12-05 19:43:46.869115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:19.743 [2024-12-05 19:43:46.869121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:19.743 [2024-12-05 19:43:46.869127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:19.743 [2024-12-05 19:43:46.869134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:19.743 [2024-12-05 19:43:46.869140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:19.743 [2024-12-05 19:43:46.869146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:19.743 [2024-12-05 19:43:46.869152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:19.743 [2024-12-05 19:43:46.869160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:19.743 [2024-12-05 19:43:46.869166] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:19.743 [2024-12-05 19:43:46.869173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:19.743 [2024-12-05 19:43:46.869180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:19.743 [2024-12-05 19:43:46.869186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:19.743 [2024-12-05 19:43:46.869193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:19.743 [2024-12-05 19:43:46.869200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:19.743 [2024-12-05 19:43:46.869206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:19.743 
[2024-12-05 19:43:46.869214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:19.743 [2024-12-05 19:43:46.869220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:19.743 [2024-12-05 19:43:46.869226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:19.743 [2024-12-05 19:43:46.869234] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:19.743 [2024-12-05 19:43:46.869243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:19.743 [2024-12-05 19:43:46.869253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:19.743 [2024-12-05 19:43:46.869261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:19.743 [2024-12-05 19:43:46.869268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:19.743 [2024-12-05 19:43:46.869274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:19.743 [2024-12-05 19:43:46.869281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:19.743 [2024-12-05 19:43:46.869288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:19.743 [2024-12-05 19:43:46.869295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:19.743 [2024-12-05 19:43:46.869302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:19.743 [2024-12-05 19:43:46.869309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:19.743 [2024-12-05 19:43:46.869316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:19.743 [2024-12-05 19:43:46.869322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:19.743 [2024-12-05 19:43:46.869329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:19.743 [2024-12-05 19:43:46.869336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:19.743 [2024-12-05 19:43:46.869343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:19.743 [2024-12-05 19:43:46.869350] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:19.743 [2024-12-05 19:43:46.869358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:19.743 [2024-12-05 19:43:46.869367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:19.744 [2024-12-05 19:43:46.869374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:19.744 [2024-12-05 19:43:46.869381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:19.744 [2024-12-05 19:43:46.869389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:19.744 [2024-12-05 19:43:46.869396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.744 [2024-12-05 19:43:46.869404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:19.744 [2024-12-05 19:43:46.869411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:25:19.744 [2024-12-05 19:43:46.869417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.744 [2024-12-05 19:43:46.895504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.744 [2024-12-05 19:43:46.895555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:19.744 [2024-12-05 19:43:46.895568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.032 ms 00:25:19.744 [2024-12-05 19:43:46.895579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.744 [2024-12-05 19:43:46.895690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.744 [2024-12-05 19:43:46.895700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:19.744 [2024-12-05 19:43:46.895708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:25:19.744 [2024-12-05 19:43:46.895715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.744 [2024-12-05 19:43:46.938397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.744 [2024-12-05 19:43:46.938453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:19.744 [2024-12-05 19:43:46.938468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.611 ms 00:25:19.744 [2024-12-05 19:43:46.938477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.744 [2024-12-05 19:43:46.938536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.744 [2024-12-05 19:43:46.938547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:19.744 [2024-12-05 19:43:46.938559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:19.744 [2024-12-05 19:43:46.938567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.744 [2024-12-05 19:43:46.938983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.744 [2024-12-05 19:43:46.939008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:19.744 [2024-12-05 19:43:46.939018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:25:19.744 [2024-12-05 19:43:46.939027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.744 [2024-12-05 19:43:46.939159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.744 [2024-12-05 19:43:46.939173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:19.744 [2024-12-05 19:43:46.939186] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:25:19.744 [2024-12-05 19:43:46.939193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.744 [2024-12-05 19:43:46.952279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.744 [2024-12-05 19:43:46.952508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:19.744 [2024-12-05 19:43:46.952526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.065 ms 00:25:19.744 [2024-12-05 19:43:46.952534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.744 [2024-12-05 19:43:46.964964] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:19.744 [2024-12-05 19:43:46.965015] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:19.744 [2024-12-05 19:43:46.965028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.744 [2024-12-05 19:43:46.965037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:19.744 [2024-12-05 19:43:46.965047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.368 ms 00:25:19.744 [2024-12-05 19:43:46.965054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.744 [2024-12-05 19:43:46.989746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.744 [2024-12-05 19:43:46.989820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:19.744 [2024-12-05 19:43:46.989833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.633 ms 00:25:19.744 [2024-12-05 19:43:46.989840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.002 [2024-12-05 19:43:47.002268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.002 [2024-12-05 19:43:47.002318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:20.002 [2024-12-05 19:43:47.002330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.364 ms 00:25:20.002 [2024-12-05 19:43:47.002337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.002 [2024-12-05 19:43:47.014045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.002 [2024-12-05 19:43:47.014271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:20.002 [2024-12-05 19:43:47.014290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.641 ms 00:25:20.002 [2024-12-05 19:43:47.014299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.002 [2024-12-05 19:43:47.014975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.002 [2024-12-05 19:43:47.014996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:20.002 [2024-12-05 19:43:47.015009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:25:20.002 [2024-12-05 19:43:47.015016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.002 [2024-12-05 19:43:47.072601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.002 [2024-12-05 19:43:47.072663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:20.002 [2024-12-05 19:43:47.072725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 57.565 ms 00:25:20.002 [2024-12-05 19:43:47.072733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.002 [2024-12-05 19:43:47.083737] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:20.002 [2024-12-05 19:43:47.086715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.002 [2024-12-05 19:43:47.086755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:20.002 [2024-12-05 19:43:47.086768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.904 ms 00:25:20.002 [2024-12-05 19:43:47.086776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.002 [2024-12-05 19:43:47.086884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.002 [2024-12-05 19:43:47.086895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:20.002 [2024-12-05 19:43:47.086908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:20.002 [2024-12-05 19:43:47.086916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.002 [2024-12-05 19:43:47.088269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.002 [2024-12-05 19:43:47.088307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:20.002 [2024-12-05 19:43:47.088318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.313 ms 00:25:20.002 [2024-12-05 19:43:47.088325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.002 [2024-12-05 19:43:47.088351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.002 [2024-12-05 19:43:47.088361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:20.002 [2024-12-05 19:43:47.088369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:20.002 [2024-12-05 19:43:47.088377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.002 [2024-12-05 19:43:47.088416] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:20.002 [2024-12-05 19:43:47.088426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.002 [2024-12-05 19:43:47.088434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:20.002 [2024-12-05 19:43:47.088442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:20.003 [2024-12-05 19:43:47.088450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.003 [2024-12-05 19:43:47.113152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.003 [2024-12-05 19:43:47.113214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:20.003 [2024-12-05 19:43:47.113232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.682 ms 00:25:20.003 [2024-12-05 19:43:47.113240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.003 [2024-12-05 19:43:47.113339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.003 [2024-12-05 19:43:47.113349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:20.003 [2024-12-05 19:43:47.113357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:20.003 [2024-12-05 19:43:47.113365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
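Several figures in the startup trace above cross-check against each other. The superblock metadata layout lists the L2P region as blk_offs:0x20 blk_sz:0x5000; 0x5000 is 20480 blocks, and the human-readable layout dump sizes the same region at 80.00 MiB, which implies a 4 KiB FTL block (80 MiB / 20480 = 4096 B). The same number falls out of the L2P parameters: 20971520 entries × a 4-byte address size = 80 MiB exactly. At that block size, the --count=262144 passed to spdk_dd earlier works out to 262144 × 4 KiB = 1 GiB (assuming --count is denominated in device blocks here, which the totals suggest), matching the 1024 MB that the copy progress lines below count up to.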
00:25:20.003 [2024-12-05 19:43:47.114380] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 270.101 ms, result 0 00:25:21.375  [2024-12-05T19:43:49.605Z] Copying: 40/1024 [MB] (40 MBps) [2024-12-05T19:43:50.546Z] Copying: 86/1024 [MB] (46 MBps) [2024-12-05T19:43:51.478Z] Copying: 133/1024 [MB] (47 MBps) [2024-12-05T19:43:52.411Z] Copying: 181/1024 [MB] (47 MBps) [2024-12-05T19:43:53.345Z] Copying: 227/1024 [MB] (46 MBps) [2024-12-05T19:43:54.718Z] Copying: 274/1024 [MB] (46 MBps) [2024-12-05T19:43:55.652Z] Copying: 320/1024 [MB] (45 MBps) [2024-12-05T19:43:56.630Z] Copying: 362/1024 [MB] (42 MBps) [2024-12-05T19:43:57.563Z] Copying: 405/1024 [MB] (43 MBps) [2024-12-05T19:43:58.496Z] Copying: 452/1024 [MB] (46 MBps) [2024-12-05T19:43:59.429Z] Copying: 493/1024 [MB] (41 MBps) [2024-12-05T19:44:00.359Z] Copying: 540/1024 [MB] (46 MBps) [2024-12-05T19:44:01.732Z] Copying: 576/1024 [MB] (35 MBps) [2024-12-05T19:44:02.358Z] Copying: 612/1024 [MB] (36 MBps) [2024-12-05T19:44:03.730Z] Copying: 656/1024 [MB] (44 MBps) [2024-12-05T19:44:04.665Z] Copying: 699/1024 [MB] (43 MBps) [2024-12-05T19:44:05.598Z] Copying: 746/1024 [MB] (46 MBps) [2024-12-05T19:44:06.532Z] Copying: 793/1024 [MB] (46 MBps) [2024-12-05T19:44:07.470Z] Copying: 838/1024 [MB] (45 MBps) [2024-12-05T19:44:08.403Z] Copying: 863/1024 [MB] (25 MBps) [2024-12-05T19:44:09.335Z] Copying: 887/1024 [MB] (24 MBps) [2024-12-05T19:44:10.714Z] Copying: 934/1024 [MB] (46 MBps) [2024-12-05T19:44:11.652Z] Copying: 957/1024 [MB] (22 MBps) [2024-12-05T19:44:12.590Z] Copying: 970/1024 [MB] (13 MBps) [2024-12-05T19:44:13.527Z] Copying: 988/1024 [MB] (17 MBps) [2024-12-05T19:44:14.472Z] Copying: 1003/1024 [MB] (15 MBps) [2024-12-05T19:44:14.472Z] Copying: 1024/1024 [MB] (average 37 MBps)[2024-12-05 19:44:14.379844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.217 [2024-12-05 19:44:14.379899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:47.217 [2024-12-05 19:44:14.379922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:47.217 [2024-12-05 19:44:14.379931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.217 [2024-12-05 19:44:14.379953] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:47.217 [2024-12-05 19:44:14.382754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.217 [2024-12-05 19:44:14.382785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:47.217 [2024-12-05 19:44:14.382796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.786 ms 00:25:47.217 [2024-12-05 19:44:14.382804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.217 [2024-12-05 19:44:14.383023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.217 [2024-12-05 19:44:14.383031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:47.217 [2024-12-05 19:44:14.383040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:25:47.217 [2024-12-05 19:44:14.383052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.218 [2024-12-05 19:44:14.388510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.218 [2024-12-05 19:44:14.388542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:47.218 [2024-12-05 
19:44:14.388553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.442 ms 00:25:47.218 [2024-12-05 19:44:14.388561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.218 [2024-12-05 19:44:14.395154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.218 [2024-12-05 19:44:14.395183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:47.218 [2024-12-05 19:44:14.395193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.560 ms 00:25:47.218 [2024-12-05 19:44:14.395206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.218 [2024-12-05 19:44:14.420198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.218 [2024-12-05 19:44:14.420239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:47.218 [2024-12-05 19:44:14.420251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.929 ms 00:25:47.218 [2024-12-05 19:44:14.420259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.218 [2024-12-05 19:44:14.433398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.218 [2024-12-05 19:44:14.433445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:47.218 [2024-12-05 19:44:14.433457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.102 ms 00:25:47.218 [2024-12-05 19:44:14.433464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.477 [2024-12-05 19:44:14.694762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.477 [2024-12-05 19:44:14.694821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:47.477 [2024-12-05 19:44:14.694834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 261.255 ms 00:25:47.477 [2024-12-05 19:44:14.694842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.477 [2024-12-05 19:44:14.719501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.477 [2024-12-05 19:44:14.719547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:47.477 [2024-12-05 19:44:14.719559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.643 ms 00:25:47.477 [2024-12-05 19:44:14.719568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.743 [2024-12-05 19:44:14.743240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.743 [2024-12-05 19:44:14.743281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:47.743 [2024-12-05 19:44:14.743293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.636 ms 00:25:47.743 [2024-12-05 19:44:14.743300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.743 [2024-12-05 19:44:14.766319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.743 [2024-12-05 19:44:14.766359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:47.743 [2024-12-05 19:44:14.766370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.979 ms 00:25:47.743 [2024-12-05 19:44:14.766378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.743 [2024-12-05 19:44:14.789237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.743 [2024-12-05 19:44:14.789414] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:47.743 [2024-12-05 19:44:14.789431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.798 ms 00:25:47.743 [2024-12-05 19:44:14.789439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.743 [2024-12-05 19:44:14.789472] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:47.743 [2024-12-05 19:44:14.789488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:25:47.743 [2024-12-05 19:44:14.789499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 
[2024-12-05 19:44:14.789658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:25:47.743 [2024-12-05 19:44:14.789876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.789998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.790005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.790012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.790020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.790027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.790034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.790041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.790049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:25:47.743 [2024-12-05 19:44:14.790056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:47.744 [2024-12-05 19:44:14.790264] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:47.744 [2024-12-05 19:44:14.790271] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f2a2c721-9529-45cb-beae-3973f9aeaf2f 00:25:47.744 [2024-12-05 19:44:14.790279] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:25:47.744 [2024-12-05 19:44:14.790286] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 21696 00:25:47.744 [2024-12-05 19:44:14.790293] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 20736 00:25:47.744 [2024-12-05 19:44:14.790302] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0463 00:25:47.744 [2024-12-05 19:44:14.790312] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:47.744 [2024-12-05 19:44:14.790325] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:47.744 [2024-12-05 19:44:14.790333] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:47.744 [2024-12-05 19:44:14.790339] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:47.744 [2024-12-05 19:44:14.790346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:47.744 [2024-12-05 19:44:14.790353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.744 [2024-12-05 19:44:14.790361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:47.744 [2024-12-05 19:44:14.790369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.882 ms 00:25:47.744 [2024-12-05 19:44:14.790377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.744 [2024-12-05 19:44:14.802859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.744 [2024-12-05 19:44:14.802893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:47.744 [2024-12-05 19:44:14.802909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.464 ms 00:25:47.744 [2024-12-05 19:44:14.802918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.744 [2024-12-05 19:44:14.803277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.744 [2024-12-05 19:44:14.803290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:47.744 [2024-12-05 19:44:14.803299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:25:47.744 [2024-12-05 19:44:14.803306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.744 [2024-12-05 19:44:14.835562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.744 [2024-12-05 19:44:14.835616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:47.744 [2024-12-05 19:44:14.835626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.744 [2024-12-05 19:44:14.835635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:47.744 [2024-12-05 19:44:14.835714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.744 [2024-12-05 19:44:14.835730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:47.744 [2024-12-05 19:44:14.835739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.744 [2024-12-05 19:44:14.835745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.744 [2024-12-05 19:44:14.835810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.744 [2024-12-05 19:44:14.835819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:47.744 [2024-12-05 19:44:14.835831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.744 [2024-12-05 19:44:14.835839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.744 [2024-12-05 19:44:14.835853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.744 [2024-12-05 19:44:14.835861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:47.744 [2024-12-05 19:44:14.835869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.744 [2024-12-05 19:44:14.835876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.744 [2024-12-05 19:44:14.912476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.744 [2024-12-05 19:44:14.912537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:47.744 [2024-12-05 19:44:14.912548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.744 [2024-12-05 19:44:14.912555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.744 [2024-12-05 19:44:14.975399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.744 [2024-12-05 19:44:14.975595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:47.744 [2024-12-05 19:44:14.975611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.744 [2024-12-05 19:44:14.975619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.744 [2024-12-05 19:44:14.975707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.744 [2024-12-05 19:44:14.975717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:47.744 [2024-12-05 19:44:14.975726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.744 [2024-12-05 19:44:14.975742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.744 [2024-12-05 19:44:14.975775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.744 [2024-12-05 19:44:14.975784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:47.744 [2024-12-05 19:44:14.975791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.744 [2024-12-05 19:44:14.975798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.744 [2024-12-05 19:44:14.975888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.744 [2024-12-05 19:44:14.975898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:47.744 [2024-12-05 19:44:14.975906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.744 
[2024-12-05 19:44:14.975913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.744 [2024-12-05 19:44:14.975945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.744 [2024-12-05 19:44:14.975954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:47.744 [2024-12-05 19:44:14.975962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.744 [2024-12-05 19:44:14.975970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.744 [2024-12-05 19:44:14.976001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.744 [2024-12-05 19:44:14.976010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:47.744 [2024-12-05 19:44:14.976018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.744 [2024-12-05 19:44:14.976025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.744 [2024-12-05 19:44:14.976066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.744 [2024-12-05 19:44:14.976075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:47.744 [2024-12-05 19:44:14.976083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.744 [2024-12-05 19:44:14.976091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.744 [2024-12-05 19:44:14.976199] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 596.330 ms, result 0 00:25:48.678 00:25:48.678 00:25:48.678 19:44:15 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:51.228 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:51.228 19:44:17 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:51.228 19:44:17 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:51.228 19:44:17 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:51.228 19:44:17 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:51.228 19:44:17 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:51.228 Process with pid 77875 is not found 00:25:51.228 Remove shared memory files 00:25:51.228 19:44:17 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77875 00:25:51.228 19:44:17 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77875 ']' 00:25:51.228 19:44:17 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77875 00:25:51.228 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77875) - No such process 00:25:51.228 19:44:17 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77875 is not found' 00:25:51.228 19:44:17 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:51.228 19:44:17 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:51.228 19:44:17 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:51.228 19:44:17 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:51.228 19:44:18 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:51.228 19:44:18 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:51.228 19:44:18 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:51.228 
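A note on the statistics dumped above: FTL's write amplification factor is just total media writes divided by user writes, so WAF = 21696 / 20736 ≈ 1.0463, exactly the value the log reports. Below is a minimal sketch for recomputing it from a captured log; it assumes the exact "total writes:" / "user writes:" message format shown above, and the console.log file name is illustrative, not part of the test suite.

    #!/usr/bin/env bash
    # Recompute FTL write amplification from a captured autotest log (sketch).
    log=console.log   # hypothetical capture of the output above
    total=$(grep -o 'total writes: [0-9]*' "$log" | head -n1 | awk '{print $3}')
    user=$(grep -o 'user writes: [0-9]*' "$log" | head -n1 | awk '{print $3}')
    # WAF = total media writes / user writes (21696 / 20736 = 1.0463 here)
    awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF: %.4f\n", t / u }'
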
************************************
00:25:51.228 END TEST ftl_restore
00:25:51.228 ************************************
00:25:51.228
00:25:51.228 real 2m19.578s
00:25:51.228 user 2m8.892s
00:25:51.228 sys 0m11.771s
00:25:51.228 19:44:18 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:51.228 19:44:18 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:25:51.228 19:44:18 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:25:51.228 19:44:18 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:25:51.228 19:44:18 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:51.228 19:44:18 ftl -- common/autotest_common.sh@10 -- # set +x
00:25:51.228 ************************************
00:25:51.228 START TEST ftl_dirty_shutdown
00:25:51.228 ************************************
00:25:51.228 19:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:25:51.228 * Looking for test storage...
00:25:51.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:25:51.228 19:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:25:51.228 19:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version
00:25:51.228 19:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:25:51.228 19:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1
00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:51.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.229 --rc genhtml_branch_coverage=1 00:25:51.229 --rc genhtml_function_coverage=1 00:25:51.229 --rc genhtml_legend=1 00:25:51.229 --rc geninfo_all_blocks=1 00:25:51.229 --rc geninfo_unexecuted_blocks=1 00:25:51.229 00:25:51.229 ' 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:51.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.229 --rc genhtml_branch_coverage=1 00:25:51.229 --rc genhtml_function_coverage=1 00:25:51.229 --rc genhtml_legend=1 00:25:51.229 --rc geninfo_all_blocks=1 00:25:51.229 --rc geninfo_unexecuted_blocks=1 00:25:51.229 00:25:51.229 ' 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:51.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.229 --rc genhtml_branch_coverage=1 00:25:51.229 --rc genhtml_function_coverage=1 00:25:51.229 --rc genhtml_legend=1 00:25:51.229 --rc geninfo_all_blocks=1 00:25:51.229 --rc geninfo_unexecuted_blocks=1 00:25:51.229 00:25:51.229 ' 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:51.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.229 --rc genhtml_branch_coverage=1 00:25:51.229 --rc genhtml_function_coverage=1 00:25:51.229 --rc genhtml_legend=1 00:25:51.229 --rc geninfo_all_blocks=1 00:25:51.229 --rc geninfo_unexecuted_blocks=1 00:25:51.229 00:25:51.229 ' 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:51.229 19:44:18 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=79366 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 79366 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 79366 ']' 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:51.229 19:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:51.229 [2024-12-05 19:44:18.297549] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
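The launch sequence above is the stock autotest pattern: start spdk_tgt pinned to core 0, stash its pid in svcpid, and block in waitforlisten until the RPC socket answers. A standalone approximation follows, assuming the default /var/tmp/spdk.sock socket; the retry count and sleep interval are illustrative, not the actual autotest_common.sh values.

    #!/usr/bin/env bash
    # Start the SPDK target on core 0 and wait until its RPC server responds (sketch).
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0x1 &
    svcpid=$!
    for _ in $(seq 1 100); do
        # rpc_get_methods succeeds as soon as the target is listening
        if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            echo "spdk_tgt (pid $svcpid) is up"
            exit 0
        fi
        sleep 0.1
    done
    echo "spdk_tgt failed to start" >&2; kill "$svcpid"; exit 1
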
00:25:51.229 [2024-12-05 19:44:18.297834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79366 ] 00:25:51.229 [2024-12-05 19:44:18.455610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.487 [2024-12-05 19:44:18.557286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.054 19:44:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:52.054 19:44:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:52.054 19:44:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:52.054 19:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:52.054 19:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:52.054 19:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:52.054 19:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:52.054 19:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:52.315 19:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:52.315 19:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:52.315 19:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:52.315 19:44:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:52.315 19:44:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:52.315 19:44:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:52.315 19:44:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:52.315 19:44:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:52.577 19:44:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:52.577 { 00:25:52.577 "name": "nvme0n1", 00:25:52.577 "aliases": [ 00:25:52.577 "6eeea318-73b6-479f-a9c8-c13c5e55d590" 00:25:52.577 ], 00:25:52.577 "product_name": "NVMe disk", 00:25:52.577 "block_size": 4096, 00:25:52.577 "num_blocks": 1310720, 00:25:52.577 "uuid": "6eeea318-73b6-479f-a9c8-c13c5e55d590", 00:25:52.577 "numa_id": -1, 00:25:52.577 "assigned_rate_limits": { 00:25:52.577 "rw_ios_per_sec": 0, 00:25:52.577 "rw_mbytes_per_sec": 0, 00:25:52.577 "r_mbytes_per_sec": 0, 00:25:52.577 "w_mbytes_per_sec": 0 00:25:52.577 }, 00:25:52.577 "claimed": true, 00:25:52.577 "claim_type": "read_many_write_one", 00:25:52.577 "zoned": false, 00:25:52.577 "supported_io_types": { 00:25:52.577 "read": true, 00:25:52.577 "write": true, 00:25:52.577 "unmap": true, 00:25:52.577 "flush": true, 00:25:52.577 "reset": true, 00:25:52.577 "nvme_admin": true, 00:25:52.577 "nvme_io": true, 00:25:52.577 "nvme_io_md": false, 00:25:52.577 "write_zeroes": true, 00:25:52.577 "zcopy": false, 00:25:52.577 "get_zone_info": false, 00:25:52.577 "zone_management": false, 00:25:52.577 "zone_append": false, 00:25:52.577 "compare": true, 00:25:52.577 "compare_and_write": false, 00:25:52.577 "abort": true, 00:25:52.577 "seek_hole": false, 00:25:52.577 "seek_data": false, 00:25:52.577 
"copy": true, 00:25:52.577 "nvme_iov_md": false 00:25:52.577 }, 00:25:52.577 "driver_specific": { 00:25:52.577 "nvme": [ 00:25:52.577 { 00:25:52.577 "pci_address": "0000:00:11.0", 00:25:52.577 "trid": { 00:25:52.577 "trtype": "PCIe", 00:25:52.577 "traddr": "0000:00:11.0" 00:25:52.577 }, 00:25:52.577 "ctrlr_data": { 00:25:52.577 "cntlid": 0, 00:25:52.577 "vendor_id": "0x1b36", 00:25:52.577 "model_number": "QEMU NVMe Ctrl", 00:25:52.577 "serial_number": "12341", 00:25:52.577 "firmware_revision": "8.0.0", 00:25:52.577 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:52.577 "oacs": { 00:25:52.577 "security": 0, 00:25:52.577 "format": 1, 00:25:52.577 "firmware": 0, 00:25:52.577 "ns_manage": 1 00:25:52.577 }, 00:25:52.577 "multi_ctrlr": false, 00:25:52.577 "ana_reporting": false 00:25:52.577 }, 00:25:52.577 "vs": { 00:25:52.577 "nvme_version": "1.4" 00:25:52.577 }, 00:25:52.577 "ns_data": { 00:25:52.577 "id": 1, 00:25:52.577 "can_share": false 00:25:52.577 } 00:25:52.577 } 00:25:52.577 ], 00:25:52.577 "mp_policy": "active_passive" 00:25:52.577 } 00:25:52.577 } 00:25:52.577 ]' 00:25:52.577 19:44:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:52.577 19:44:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:52.577 19:44:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:52.577 19:44:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:52.577 19:44:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:52.577 19:44:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:52.577 19:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:52.577 19:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:52.577 19:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:52.577 19:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:52.577 19:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:52.838 19:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=5ccacc13-b3d3-43f4-94c1-de7de40bacfd 00:25:52.838 19:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:52.838 19:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5ccacc13-b3d3-43f4-94c1-de7de40bacfd 00:25:53.098 19:44:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:53.358 19:44:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=8c5ee5d8-6b07-4c7d-8d36-b819cbb8847c 00:25:53.358 19:44:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8c5ee5d8-6b07-4c7d-8d36-b819cbb8847c 00:25:53.358 19:44:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=0f6c7715-bc22-426e-bb1b-af9607a8293d 00:25:53.358 19:44:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:53.358 19:44:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0f6c7715-bc22-426e-bb1b-af9607a8293d 00:25:53.358 19:44:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:53.358 19:44:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:25:53.358 19:44:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=0f6c7715-bc22-426e-bb1b-af9607a8293d 00:25:53.359 19:44:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:53.359 19:44:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 0f6c7715-bc22-426e-bb1b-af9607a8293d 00:25:53.359 19:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=0f6c7715-bc22-426e-bb1b-af9607a8293d 00:25:53.359 19:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:53.359 19:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:53.359 19:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:53.359 19:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0f6c7715-bc22-426e-bb1b-af9607a8293d 00:25:53.618 19:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:53.618 { 00:25:53.618 "name": "0f6c7715-bc22-426e-bb1b-af9607a8293d", 00:25:53.618 "aliases": [ 00:25:53.618 "lvs/nvme0n1p0" 00:25:53.618 ], 00:25:53.618 "product_name": "Logical Volume", 00:25:53.618 "block_size": 4096, 00:25:53.618 "num_blocks": 26476544, 00:25:53.618 "uuid": "0f6c7715-bc22-426e-bb1b-af9607a8293d", 00:25:53.618 "assigned_rate_limits": { 00:25:53.618 "rw_ios_per_sec": 0, 00:25:53.618 "rw_mbytes_per_sec": 0, 00:25:53.618 "r_mbytes_per_sec": 0, 00:25:53.618 "w_mbytes_per_sec": 0 00:25:53.618 }, 00:25:53.618 "claimed": false, 00:25:53.618 "zoned": false, 00:25:53.618 "supported_io_types": { 00:25:53.618 "read": true, 00:25:53.618 "write": true, 00:25:53.618 "unmap": true, 00:25:53.618 "flush": false, 00:25:53.618 "reset": true, 00:25:53.618 "nvme_admin": false, 00:25:53.618 "nvme_io": false, 00:25:53.618 "nvme_io_md": false, 00:25:53.618 "write_zeroes": true, 00:25:53.618 "zcopy": false, 00:25:53.618 "get_zone_info": false, 00:25:53.618 "zone_management": false, 00:25:53.618 "zone_append": false, 00:25:53.618 "compare": false, 00:25:53.618 "compare_and_write": false, 00:25:53.618 "abort": false, 00:25:53.618 "seek_hole": true, 00:25:53.618 "seek_data": true, 00:25:53.618 "copy": false, 00:25:53.618 "nvme_iov_md": false 00:25:53.618 }, 00:25:53.618 "driver_specific": { 00:25:53.618 "lvol": { 00:25:53.618 "lvol_store_uuid": "8c5ee5d8-6b07-4c7d-8d36-b819cbb8847c", 00:25:53.619 "base_bdev": "nvme0n1", 00:25:53.619 "thin_provision": true, 00:25:53.619 "num_allocated_clusters": 0, 00:25:53.619 "snapshot": false, 00:25:53.619 "clone": false, 00:25:53.619 "esnap_clone": false 00:25:53.619 } 00:25:53.619 } 00:25:53.619 } 00:25:53.619 ]' 00:25:53.619 19:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:53.619 19:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:53.619 19:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:53.879 19:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:53.879 19:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:53.879 19:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:53.879 19:44:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:53.879 19:44:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:53.879 19:44:20 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:53.879 19:44:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:53.879 19:44:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:54.141 19:44:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 0f6c7715-bc22-426e-bb1b-af9607a8293d 00:25:54.141 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=0f6c7715-bc22-426e-bb1b-af9607a8293d 00:25:54.141 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:54.141 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:54.141 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:54.141 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0f6c7715-bc22-426e-bb1b-af9607a8293d 00:25:54.141 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:54.141 { 00:25:54.141 "name": "0f6c7715-bc22-426e-bb1b-af9607a8293d", 00:25:54.141 "aliases": [ 00:25:54.141 "lvs/nvme0n1p0" 00:25:54.141 ], 00:25:54.141 "product_name": "Logical Volume", 00:25:54.141 "block_size": 4096, 00:25:54.141 "num_blocks": 26476544, 00:25:54.141 "uuid": "0f6c7715-bc22-426e-bb1b-af9607a8293d", 00:25:54.141 "assigned_rate_limits": { 00:25:54.141 "rw_ios_per_sec": 0, 00:25:54.141 "rw_mbytes_per_sec": 0, 00:25:54.141 "r_mbytes_per_sec": 0, 00:25:54.141 "w_mbytes_per_sec": 0 00:25:54.141 }, 00:25:54.141 "claimed": false, 00:25:54.141 "zoned": false, 00:25:54.141 "supported_io_types": { 00:25:54.141 "read": true, 00:25:54.141 "write": true, 00:25:54.141 "unmap": true, 00:25:54.141 "flush": false, 00:25:54.141 "reset": true, 00:25:54.141 "nvme_admin": false, 00:25:54.141 "nvme_io": false, 00:25:54.141 "nvme_io_md": false, 00:25:54.141 "write_zeroes": true, 00:25:54.141 "zcopy": false, 00:25:54.141 "get_zone_info": false, 00:25:54.141 "zone_management": false, 00:25:54.141 "zone_append": false, 00:25:54.141 "compare": false, 00:25:54.141 "compare_and_write": false, 00:25:54.141 "abort": false, 00:25:54.141 "seek_hole": true, 00:25:54.141 "seek_data": true, 00:25:54.141 "copy": false, 00:25:54.141 "nvme_iov_md": false 00:25:54.141 }, 00:25:54.141 "driver_specific": { 00:25:54.141 "lvol": { 00:25:54.141 "lvol_store_uuid": "8c5ee5d8-6b07-4c7d-8d36-b819cbb8847c", 00:25:54.141 "base_bdev": "nvme0n1", 00:25:54.141 "thin_provision": true, 00:25:54.141 "num_allocated_clusters": 0, 00:25:54.141 "snapshot": false, 00:25:54.141 "clone": false, 00:25:54.141 "esnap_clone": false 00:25:54.141 } 00:25:54.141 } 00:25:54.141 } 00:25:54.141 ]' 00:25:54.141 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:54.141 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:54.141 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:54.402 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:54.402 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:54.402 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:54.402 19:44:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:54.402 19:44:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:54.402 19:44:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:54.402 19:44:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 0f6c7715-bc22-426e-bb1b-af9607a8293d 00:25:54.402 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=0f6c7715-bc22-426e-bb1b-af9607a8293d 00:25:54.402 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:54.403 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:54.403 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:54.403 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0f6c7715-bc22-426e-bb1b-af9607a8293d 00:25:54.664 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:54.664 { 00:25:54.664 "name": "0f6c7715-bc22-426e-bb1b-af9607a8293d", 00:25:54.664 "aliases": [ 00:25:54.664 "lvs/nvme0n1p0" 00:25:54.664 ], 00:25:54.664 "product_name": "Logical Volume", 00:25:54.664 "block_size": 4096, 00:25:54.664 "num_blocks": 26476544, 00:25:54.664 "uuid": "0f6c7715-bc22-426e-bb1b-af9607a8293d", 00:25:54.664 "assigned_rate_limits": { 00:25:54.664 "rw_ios_per_sec": 0, 00:25:54.664 "rw_mbytes_per_sec": 0, 00:25:54.664 "r_mbytes_per_sec": 0, 00:25:54.664 "w_mbytes_per_sec": 0 00:25:54.664 }, 00:25:54.664 "claimed": false, 00:25:54.664 "zoned": false, 00:25:54.664 "supported_io_types": { 00:25:54.664 "read": true, 00:25:54.664 "write": true, 00:25:54.664 "unmap": true, 00:25:54.664 "flush": false, 00:25:54.664 "reset": true, 00:25:54.664 "nvme_admin": false, 00:25:54.664 "nvme_io": false, 00:25:54.664 "nvme_io_md": false, 00:25:54.664 "write_zeroes": true, 00:25:54.664 "zcopy": false, 00:25:54.664 "get_zone_info": false, 00:25:54.664 "zone_management": false, 00:25:54.664 "zone_append": false, 00:25:54.664 "compare": false, 00:25:54.664 "compare_and_write": false, 00:25:54.664 "abort": false, 00:25:54.664 "seek_hole": true, 00:25:54.664 "seek_data": true, 00:25:54.664 "copy": false, 00:25:54.664 "nvme_iov_md": false 00:25:54.664 }, 00:25:54.664 "driver_specific": { 00:25:54.664 "lvol": { 00:25:54.664 "lvol_store_uuid": "8c5ee5d8-6b07-4c7d-8d36-b819cbb8847c", 00:25:54.664 "base_bdev": "nvme0n1", 00:25:54.664 "thin_provision": true, 00:25:54.664 "num_allocated_clusters": 0, 00:25:54.664 "snapshot": false, 00:25:54.664 "clone": false, 00:25:54.664 "esnap_clone": false 00:25:54.664 } 00:25:54.664 } 00:25:54.664 } 00:25:54.664 ]' 00:25:54.664 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:54.664 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:54.664 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:54.926 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:54.926 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:54.926 19:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:54.926 19:44:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:54.926 19:44:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0f6c7715-bc22-426e-bb1b-af9607a8293d 
--l2p_dram_limit 10' 00:25:54.926 19:44:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:54.926 19:44:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:54.926 19:44:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:54.926 19:44:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0f6c7715-bc22-426e-bb1b-af9607a8293d --l2p_dram_limit 10 -c nvc0n1p0 00:25:54.926 [2024-12-05 19:44:22.133214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.926 [2024-12-05 19:44:22.133464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:54.926 [2024-12-05 19:44:22.133498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:54.927 [2024-12-05 19:44:22.133510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.927 [2024-12-05 19:44:22.133601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.927 [2024-12-05 19:44:22.133614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:54.927 [2024-12-05 19:44:22.133627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:54.927 [2024-12-05 19:44:22.133637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.927 [2024-12-05 19:44:22.133668] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:54.927 [2024-12-05 19:44:22.134473] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:54.927 [2024-12-05 19:44:22.134514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.927 [2024-12-05 19:44:22.134524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:54.927 [2024-12-05 19:44:22.134538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 00:25:54.927 [2024-12-05 19:44:22.134547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.927 [2024-12-05 19:44:22.134633] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ef06c99b-746b-450b-a28e-d2a7acff1c63 00:25:54.927 [2024-12-05 19:44:22.136392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.927 [2024-12-05 19:44:22.136449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:54.927 [2024-12-05 19:44:22.136463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:54.927 [2024-12-05 19:44:22.136474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.927 [2024-12-05 19:44:22.145905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.927 [2024-12-05 19:44:22.146084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:54.927 [2024-12-05 19:44:22.146102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.359 ms 00:25:54.927 [2024-12-05 19:44:22.146113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.927 [2024-12-05 19:44:22.146227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.927 [2024-12-05 19:44:22.146239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:54.927 [2024-12-05 19:44:22.146249] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:25:54.927 [2024-12-05 19:44:22.146262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.927 [2024-12-05 19:44:22.146312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.927 [2024-12-05 19:44:22.146324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:54.927 [2024-12-05 19:44:22.146335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:54.927 [2024-12-05 19:44:22.146345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.927 [2024-12-05 19:44:22.146370] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:54.927 [2024-12-05 19:44:22.150789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.927 [2024-12-05 19:44:22.150832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:54.927 [2024-12-05 19:44:22.150848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.422 ms 00:25:54.927 [2024-12-05 19:44:22.150856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.927 [2024-12-05 19:44:22.150921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.927 [2024-12-05 19:44:22.150932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:54.927 [2024-12-05 19:44:22.150943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:54.927 [2024-12-05 19:44:22.150951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.927 [2024-12-05 19:44:22.150991] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:54.927 [2024-12-05 19:44:22.151146] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:54.927 [2024-12-05 19:44:22.151165] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:54.927 [2024-12-05 19:44:22.151177] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:54.927 [2024-12-05 19:44:22.151190] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:54.927 [2024-12-05 19:44:22.151200] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:54.927 [2024-12-05 19:44:22.151210] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:54.927 [2024-12-05 19:44:22.151221] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:54.927 [2024-12-05 19:44:22.151232] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:54.927 [2024-12-05 19:44:22.151240] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:54.927 [2024-12-05 19:44:22.151251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.927 [2024-12-05 19:44:22.151267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:54.927 [2024-12-05 19:44:22.151278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:25:54.927 [2024-12-05 19:44:22.151286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.927 [2024-12-05 19:44:22.151375] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.927 [2024-12-05 19:44:22.151389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:54.927 [2024-12-05 19:44:22.151400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:25:54.927 [2024-12-05 19:44:22.151410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.927 [2024-12-05 19:44:22.151515] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:54.927 [2024-12-05 19:44:22.151525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:54.927 [2024-12-05 19:44:22.151536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:54.927 [2024-12-05 19:44:22.151544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.927 [2024-12-05 19:44:22.151555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:54.927 [2024-12-05 19:44:22.151562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:54.927 [2024-12-05 19:44:22.151571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:54.927 [2024-12-05 19:44:22.151578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:54.927 [2024-12-05 19:44:22.151587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:54.927 [2024-12-05 19:44:22.151595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:54.927 [2024-12-05 19:44:22.151605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:54.927 [2024-12-05 19:44:22.151612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:54.927 [2024-12-05 19:44:22.151621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:54.927 [2024-12-05 19:44:22.151628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:54.927 [2024-12-05 19:44:22.151638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:54.927 [2024-12-05 19:44:22.151645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.927 [2024-12-05 19:44:22.151656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:54.927 [2024-12-05 19:44:22.151662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:54.927 [2024-12-05 19:44:22.151703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.927 [2024-12-05 19:44:22.151710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:54.927 [2024-12-05 19:44:22.151720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:54.927 [2024-12-05 19:44:22.151727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:54.927 [2024-12-05 19:44:22.151736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:54.927 [2024-12-05 19:44:22.151743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:54.927 [2024-12-05 19:44:22.151770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:54.927 [2024-12-05 19:44:22.151778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:54.927 [2024-12-05 19:44:22.151789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:54.927 [2024-12-05 19:44:22.151796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:54.927 [2024-12-05 19:44:22.151806] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:54.927 [2024-12-05 19:44:22.151813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:54.927 [2024-12-05 19:44:22.151822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:54.927 [2024-12-05 19:44:22.151829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:54.927 [2024-12-05 19:44:22.151841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:54.927 [2024-12-05 19:44:22.151847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:54.927 [2024-12-05 19:44:22.151857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:54.927 [2024-12-05 19:44:22.151864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:54.927 [2024-12-05 19:44:22.151875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:54.927 [2024-12-05 19:44:22.151882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:54.927 [2024-12-05 19:44:22.151891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:54.927 [2024-12-05 19:44:22.151901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.927 [2024-12-05 19:44:22.151911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:54.927 [2024-12-05 19:44:22.151918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:54.927 [2024-12-05 19:44:22.151927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.927 [2024-12-05 19:44:22.151933] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:54.927 [2024-12-05 19:44:22.151943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:54.927 [2024-12-05 19:44:22.151950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:54.927 [2024-12-05 19:44:22.151960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.927 [2024-12-05 19:44:22.151968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:54.927 [2024-12-05 19:44:22.151979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:54.927 [2024-12-05 19:44:22.151986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:54.927 [2024-12-05 19:44:22.151994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:54.928 [2024-12-05 19:44:22.152000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:54.928 [2024-12-05 19:44:22.152009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:54.928 [2024-12-05 19:44:22.152017] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:54.928 [2024-12-05 19:44:22.152032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:54.928 [2024-12-05 19:44:22.152040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:54.928 [2024-12-05 19:44:22.152050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:54.928 [2024-12-05 19:44:22.152057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:54.928 [2024-12-05 19:44:22.152069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:54.928 [2024-12-05 19:44:22.152077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:54.928 [2024-12-05 19:44:22.152088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:54.928 [2024-12-05 19:44:22.152096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:54.928 [2024-12-05 19:44:22.152106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:54.928 [2024-12-05 19:44:22.152114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:54.928 [2024-12-05 19:44:22.152125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:54.928 [2024-12-05 19:44:22.152133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:54.928 [2024-12-05 19:44:22.152143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:54.928 [2024-12-05 19:44:22.152151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:54.928 [2024-12-05 19:44:22.152161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:54.928 [2024-12-05 19:44:22.152169] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:54.928 [2024-12-05 19:44:22.152179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:54.928 [2024-12-05 19:44:22.152187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:54.928 [2024-12-05 19:44:22.152196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:54.928 [2024-12-05 19:44:22.152204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:54.928 [2024-12-05 19:44:22.152215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:54.928 [2024-12-05 19:44:22.152222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.928 [2024-12-05 19:44:22.152232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:54.928 [2024-12-05 19:44:22.152240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms 00:25:54.928 [2024-12-05 19:44:22.152249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.928 [2024-12-05 19:44:22.152289] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:54.928 [2024-12-05 19:44:22.152303] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:00.219 [2024-12-05 19:44:27.181412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.219 [2024-12-05 19:44:27.181506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:00.219 [2024-12-05 19:44:27.181526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5029.102 ms 00:26:00.219 [2024-12-05 19:44:27.181538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.219 [2024-12-05 19:44:27.213993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.219 [2024-12-05 19:44:27.214071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:00.219 [2024-12-05 19:44:27.214086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.188 ms 00:26:00.219 [2024-12-05 19:44:27.214097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.219 [2024-12-05 19:44:27.214261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.219 [2024-12-05 19:44:27.214275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:00.219 [2024-12-05 19:44:27.214289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:26:00.219 [2024-12-05 19:44:27.214302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.219 [2024-12-05 19:44:27.250576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.219 [2024-12-05 19:44:27.250636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:00.219 [2024-12-05 19:44:27.250650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.217 ms 00:26:00.219 [2024-12-05 19:44:27.250660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.219 [2024-12-05 19:44:27.250722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.219 [2024-12-05 19:44:27.250733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:00.219 [2024-12-05 19:44:27.250743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:00.219 [2024-12-05 19:44:27.250761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.219 [2024-12-05 19:44:27.251362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.219 [2024-12-05 19:44:27.251400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:00.219 [2024-12-05 19:44:27.251410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:26:00.219 [2024-12-05 19:44:27.251421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.219 [2024-12-05 19:44:27.251557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.219 [2024-12-05 19:44:27.251572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:00.219 [2024-12-05 19:44:27.251581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:26:00.219 [2024-12-05 19:44:27.251593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.219 [2024-12-05 19:44:27.269149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.219 [2024-12-05 19:44:27.269372] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:00.219 [2024-12-05 19:44:27.269393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.534 ms 00:26:00.219 [2024-12-05 19:44:27.269404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.219 [2024-12-05 19:44:27.304218] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:00.219 [2024-12-05 19:44:27.308315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.219 [2024-12-05 19:44:27.308530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:00.219 [2024-12-05 19:44:27.308559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.800 ms 00:26:00.219 [2024-12-05 19:44:27.308571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.219 [2024-12-05 19:44:27.410233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.219 [2024-12-05 19:44:27.410307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:00.219 [2024-12-05 19:44:27.410328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.599 ms 00:26:00.219 [2024-12-05 19:44:27.410337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.219 [2024-12-05 19:44:27.410564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.219 [2024-12-05 19:44:27.410576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:00.219 [2024-12-05 19:44:27.410592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:26:00.219 [2024-12-05 19:44:27.410600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.219 [2024-12-05 19:44:27.437574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.219 [2024-12-05 19:44:27.437635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:00.219 [2024-12-05 19:44:27.437652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.907 ms 00:26:00.219 [2024-12-05 19:44:27.437664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.219 [2024-12-05 19:44:27.462937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.220 [2024-12-05 19:44:27.462990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:00.220 [2024-12-05 19:44:27.463008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.091 ms 00:26:00.220 [2024-12-05 19:44:27.463016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.220 [2024-12-05 19:44:27.463639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.220 [2024-12-05 19:44:27.463658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:00.220 [2024-12-05 19:44:27.463689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:26:00.220 [2024-12-05 19:44:27.463698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.481 [2024-12-05 19:44:27.552337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.481 [2024-12-05 19:44:27.552407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:00.481 [2024-12-05 19:44:27.552431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.587 ms 00:26:00.481 [2024-12-05 19:44:27.552442] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.481 [2024-12-05 19:44:27.581597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.481 [2024-12-05 19:44:27.581665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:00.481 [2024-12-05 19:44:27.581792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.038 ms 00:26:00.481 [2024-12-05 19:44:27.581802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.481 [2024-12-05 19:44:27.608353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.481 [2024-12-05 19:44:27.608415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:00.481 [2024-12-05 19:44:27.608432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.495 ms 00:26:00.481 [2024-12-05 19:44:27.608440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.481 [2024-12-05 19:44:27.635754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.481 [2024-12-05 19:44:27.635814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:00.481 [2024-12-05 19:44:27.635830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.251 ms 00:26:00.481 [2024-12-05 19:44:27.635838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.481 [2024-12-05 19:44:27.635901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.481 [2024-12-05 19:44:27.635911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:00.481 [2024-12-05 19:44:27.635927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:00.482 [2024-12-05 19:44:27.635935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.482 [2024-12-05 19:44:27.636048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.482 [2024-12-05 19:44:27.636062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:00.482 [2024-12-05 19:44:27.636073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:26:00.482 [2024-12-05 19:44:27.636081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.482 [2024-12-05 19:44:27.637323] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5503.573 ms, result 0 00:26:00.482 { 00:26:00.482 "name": "ftl0", 00:26:00.482 "uuid": "ef06c99b-746b-450b-a28e-d2a7acff1c63" 00:26:00.482 } 00:26:00.482 19:44:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:26:00.482 19:44:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:00.743 19:44:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:26:00.743 19:44:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:26:00.743 19:44:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:26:01.005 /dev/nbd0 00:26:01.005 19:44:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:26:01.005 19:44:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:01.005 19:44:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:26:01.005 19:44:28 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:01.005 19:44:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:01.005 19:44:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:01.005 19:44:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:26:01.005 19:44:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:01.005 19:44:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:01.005 19:44:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:26:01.005 1+0 records in 00:26:01.005 1+0 records out 00:26:01.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389057 s, 10.5 MB/s 00:26:01.005 19:44:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:01.005 19:44:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:26:01.005 19:44:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:01.005 19:44:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:01.005 19:44:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:26:01.005 19:44:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:26:01.005 [2024-12-05 19:44:28.216538] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:26:01.005 [2024-12-05 19:44:28.217196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79519 ] 00:26:01.266 [2024-12-05 19:44:28.380019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.266 [2024-12-05 19:44:28.511096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.654  [2024-12-05T19:44:30.853Z] Copying: 189/1024 [MB] (189 MBps) [2024-12-05T19:44:31.796Z] Copying: 379/1024 [MB] (189 MBps) [2024-12-05T19:44:33.246Z] Copying: 569/1024 [MB] (189 MBps) [2024-12-05T19:44:33.821Z] Copying: 759/1024 [MB] (189 MBps) [2024-12-05T19:44:34.392Z] Copying: 949/1024 [MB] (190 MBps) [2024-12-05T19:44:35.332Z] Copying: 1024/1024 [MB] (average 189 MBps) 00:26:08.077 00:26:08.077 19:44:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:09.973 19:44:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:10.231 [2024-12-05 19:44:37.228594] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
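The xtrace above shows the shape of the waitfornbd helper from common/autotest_common.sh: it polls /proc/partitions until the named nbd device appears, then confirms the device actually services reads with a single 4 KiB direct-I/O read. A minimal bash reconstruction consistent with that trace follows; the sleep interval and the scratch-file path are assumptions, not the canonical source (this run used test/ftl/nbdtest):

    waitfornbd() {
        local nbd_name=$1
        local i size
        # Poll until the device shows up in /proc/partitions (up to 20 tries).
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off; the trace does not show the wait
        done
        # Confirm the device is readable: one direct-I/O block must come back non-empty.
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != "0" ] && return 0
        done
        return 1
    }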
00:26:10.231 [2024-12-05 19:44:37.228730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79618 ] 00:26:10.231 [2024-12-05 19:44:37.387813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.487 [2024-12-05 19:44:37.488084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.462  [2024-12-05T19:44:40.101Z] Copying: 13/1024 [MB] (13 MBps) [2024-12-05T19:44:41.034Z] Copying: 26/1024 [MB] (13 MBps) [2024-12-05T19:44:41.969Z] Copying: 55/1024 [MB] (29 MBps) [2024-12-05T19:44:42.908Z] Copying: 85/1024 [MB] (29 MBps) [2024-12-05T19:44:43.849Z] Copying: 115/1024 [MB] (29 MBps) [2024-12-05T19:44:44.824Z] Copying: 145/1024 [MB] (29 MBps) [2024-12-05T19:44:45.776Z] Copying: 175/1024 [MB] (30 MBps) [2024-12-05T19:44:46.715Z] Copying: 205/1024 [MB] (29 MBps) [2024-12-05T19:44:48.097Z] Copying: 235/1024 [MB] (30 MBps) [2024-12-05T19:44:49.034Z] Copying: 267/1024 [MB] (32 MBps) [2024-12-05T19:44:49.979Z] Copying: 297/1024 [MB] (30 MBps) [2024-12-05T19:44:50.959Z] Copying: 327/1024 [MB] (29 MBps) [2024-12-05T19:44:51.897Z] Copying: 358/1024 [MB] (30 MBps) [2024-12-05T19:44:52.847Z] Copying: 389/1024 [MB] (30 MBps) [2024-12-05T19:44:53.780Z] Copying: 418/1024 [MB] (29 MBps) [2024-12-05T19:44:54.715Z] Copying: 448/1024 [MB] (30 MBps) [2024-12-05T19:44:56.086Z] Copying: 479/1024 [MB] (30 MBps) [2024-12-05T19:44:57.024Z] Copying: 509/1024 [MB] (30 MBps) [2024-12-05T19:44:57.958Z] Copying: 539/1024 [MB] (29 MBps) [2024-12-05T19:44:58.891Z] Copying: 573/1024 [MB] (33 MBps) [2024-12-05T19:44:59.824Z] Copying: 603/1024 [MB] (30 MBps) [2024-12-05T19:45:00.758Z] Copying: 633/1024 [MB] (30 MBps) [2024-12-05T19:45:02.130Z] Copying: 664/1024 [MB] (30 MBps) [2024-12-05T19:45:03.065Z] Copying: 693/1024 [MB] (29 MBps) [2024-12-05T19:45:03.998Z] Copying: 723/1024 [MB] (30 MBps) [2024-12-05T19:45:04.930Z] Copying: 753/1024 [MB] (30 MBps) [2024-12-05T19:45:05.860Z] Copying: 784/1024 [MB] (30 MBps) [2024-12-05T19:45:06.844Z] Copying: 814/1024 [MB] (29 MBps) [2024-12-05T19:45:07.775Z] Copying: 844/1024 [MB] (30 MBps) [2024-12-05T19:45:09.145Z] Copying: 875/1024 [MB] (30 MBps) [2024-12-05T19:45:09.734Z] Copying: 904/1024 [MB] (29 MBps) [2024-12-05T19:45:11.130Z] Copying: 934/1024 [MB] (30 MBps) [2024-12-05T19:45:12.063Z] Copying: 964/1024 [MB] (29 MBps) [2024-12-05T19:45:12.996Z] Copying: 994/1024 [MB] (29 MBps) [2024-12-05T19:45:12.996Z] Copying: 1023/1024 [MB] (29 MBps) [2024-12-05T19:45:13.665Z] Copying: 1024/1024 [MB] (average 29 MBps) 00:26:46.410 00:26:46.410 19:45:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:26:46.410 19:45:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:26:46.410 19:45:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:46.669 [2024-12-05 19:45:13.818322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.669 [2024-12-05 19:45:13.818372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:46.669 [2024-12-05 19:45:13.818384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:46.669 [2024-12-05 19:45:13.818394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
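Every FTL management step in this log is traced as a quartet of records from mngt/ftl_mngt.c: an Action (or, during teardown, Rollback) marker, the step name, its duration, and a status code. When skimming a long run such as the 'FTL shutdown' sequence that follows, it can help to collapse each quartet to one line; a small awk sketch over a saved console log (the build.log filename is hypothetical):

    # Pair each step name with the duration record that follows it.
    awk '/trace_step/ && /name:/     { sub(/.*name: /, "");     name = $0 }
         /trace_step/ && /duration:/ { sub(/.*duration: /, ""); print $0 "\t" name }' build.log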
00:26:46.669 [2024-12-05 19:45:13.818416] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:46.669 [2024-12-05 19:45:13.820553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.669 [2024-12-05 19:45:13.820577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:46.669 [2024-12-05 19:45:13.820587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.121 ms 00:26:46.669 [2024-12-05 19:45:13.820594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.669 [2024-12-05 19:45:13.822560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.669 [2024-12-05 19:45:13.822596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:46.669 [2024-12-05 19:45:13.822608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.938 ms 00:26:46.669 [2024-12-05 19:45:13.822615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.669 [2024-12-05 19:45:13.835651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.669 [2024-12-05 19:45:13.835704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:46.669 [2024-12-05 19:45:13.835716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.012 ms 00:26:46.669 [2024-12-05 19:45:13.835723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.669 [2024-12-05 19:45:13.840573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.669 [2024-12-05 19:45:13.840596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:46.669 [2024-12-05 19:45:13.840606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.819 ms 00:26:46.669 [2024-12-05 19:45:13.840613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.669 [2024-12-05 19:45:13.859142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.669 [2024-12-05 19:45:13.859174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:46.669 [2024-12-05 19:45:13.859185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.403 ms 00:26:46.669 [2024-12-05 19:45:13.859192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.669 [2024-12-05 19:45:13.871477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.669 [2024-12-05 19:45:13.871521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:46.669 [2024-12-05 19:45:13.871536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.245 ms 00:26:46.669 [2024-12-05 19:45:13.871542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.669 [2024-12-05 19:45:13.871679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.669 [2024-12-05 19:45:13.871689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:46.669 [2024-12-05 19:45:13.871698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:26:46.669 [2024-12-05 19:45:13.871705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.669 [2024-12-05 19:45:13.890860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.669 [2024-12-05 19:45:13.890898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:46.669 
[2024-12-05 19:45:13.890909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.135 ms 00:26:46.669 [2024-12-05 19:45:13.890916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.669 [2024-12-05 19:45:13.909718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.669 [2024-12-05 19:45:13.909760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:46.669 [2024-12-05 19:45:13.909771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.761 ms 00:26:46.669 [2024-12-05 19:45:13.909778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.929 [2024-12-05 19:45:13.927341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.929 [2024-12-05 19:45:13.927389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:46.929 [2024-12-05 19:45:13.927401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.518 ms 00:26:46.929 [2024-12-05 19:45:13.927407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.929 [2024-12-05 19:45:13.945148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.929 [2024-12-05 19:45:13.945191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:46.929 [2024-12-05 19:45:13.945204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.660 ms 00:26:46.929 [2024-12-05 19:45:13.945210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.929 [2024-12-05 19:45:13.945250] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:46.929 [2024-12-05 19:45:13.945261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 
[2024-12-05 19:45:13.945352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 
state: free 00:26:46.929 [2024-12-05 19:45:13.945520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:46.929 [2024-12-05 19:45:13.945528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 
0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:46.930 [2024-12-05 19:45:13.945963] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:46.930 [2024-12-05 19:45:13.945971] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef06c99b-746b-450b-a28e-d2a7acff1c63 00:26:46.930 [2024-12-05 19:45:13.945977] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:46.930 [2024-12-05 19:45:13.945985] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:46.930 [2024-12-05 19:45:13.945993] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:46.930 [2024-12-05 19:45:13.946000] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:46.930 [2024-12-05 19:45:13.946006] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:46.930 [2024-12-05 19:45:13.946013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:46.930 [2024-12-05 19:45:13.946018] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:46.930 [2024-12-05 19:45:13.946025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:46.930 [2024-12-05 19:45:13.946030] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:46.930 [2024-12-05 19:45:13.946037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.930 [2024-12-05 19:45:13.946043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:46.930 [2024-12-05 19:45:13.946051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.788 ms 00:26:46.930 [2024-12-05 19:45:13.946057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.930 [2024-12-05 19:45:13.956019] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:26:46.930 [2024-12-05 19:45:13.956059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:46.930 [2024-12-05 19:45:13.956069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.929 ms 00:26:46.930 [2024-12-05 19:45:13.956075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.930 [2024-12-05 19:45:13.956362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.930 [2024-12-05 19:45:13.956376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:46.930 [2024-12-05 19:45:13.956384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:26:46.930 [2024-12-05 19:45:13.956390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.930 [2024-12-05 19:45:13.990311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.930 [2024-12-05 19:45:13.990357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:46.930 [2024-12-05 19:45:13.990370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.930 [2024-12-05 19:45:13.990377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.930 [2024-12-05 19:45:13.990444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.930 [2024-12-05 19:45:13.990451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:46.930 [2024-12-05 19:45:13.990458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.930 [2024-12-05 19:45:13.990464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.930 [2024-12-05 19:45:13.990539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.930 [2024-12-05 19:45:13.990547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:46.930 [2024-12-05 19:45:13.990554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.930 [2024-12-05 19:45:13.990560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.930 [2024-12-05 19:45:13.990578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.930 [2024-12-05 19:45:13.990584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:46.931 [2024-12-05 19:45:13.990591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.931 [2024-12-05 19:45:13.990597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.931 [2024-12-05 19:45:14.051534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.931 [2024-12-05 19:45:14.051579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:46.931 [2024-12-05 19:45:14.051591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.931 [2024-12-05 19:45:14.051597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.931 [2024-12-05 19:45:14.100336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.931 [2024-12-05 19:45:14.100378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:46.931 [2024-12-05 19:45:14.100388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.931 [2024-12-05 19:45:14.100395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:46.931 [2024-12-05 19:45:14.100489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.931 [2024-12-05 19:45:14.100499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:46.931 [2024-12-05 19:45:14.100507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.931 [2024-12-05 19:45:14.100513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.931 [2024-12-05 19:45:14.100551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.931 [2024-12-05 19:45:14.100558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:46.931 [2024-12-05 19:45:14.100566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.931 [2024-12-05 19:45:14.100572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.931 [2024-12-05 19:45:14.100643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.931 [2024-12-05 19:45:14.100651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:46.931 [2024-12-05 19:45:14.100660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.931 [2024-12-05 19:45:14.100666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.931 [2024-12-05 19:45:14.100716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.931 [2024-12-05 19:45:14.100723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:46.931 [2024-12-05 19:45:14.100730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.931 [2024-12-05 19:45:14.100736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.931 [2024-12-05 19:45:14.100767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.931 [2024-12-05 19:45:14.100774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:46.931 [2024-12-05 19:45:14.100783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.931 [2024-12-05 19:45:14.100788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.931 [2024-12-05 19:45:14.100827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.931 [2024-12-05 19:45:14.100834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:46.931 [2024-12-05 19:45:14.100841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.931 [2024-12-05 19:45:14.100847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.931 [2024-12-05 19:45:14.100955] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 282.604 ms, result 0 00:26:46.931 true 00:26:46.931 19:45:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 79366 00:26:46.931 19:45:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid79366 00:26:46.931 19:45:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:26:47.190 [2024-12-05 19:45:14.215408] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:26:47.190 [2024-12-05 19:45:14.215579] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80003 ] 00:26:47.190 [2024-12-05 19:45:14.379782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.449 [2024-12-05 19:45:14.512973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.837  [2024-12-05T19:45:17.025Z] Copying: 194/1024 [MB] (194 MBps) [2024-12-05T19:45:18.003Z] Copying: 411/1024 [MB] (217 MBps) [2024-12-05T19:45:18.941Z] Copying: 662/1024 [MB] (250 MBps) [2024-12-05T19:45:19.511Z] Copying: 911/1024 [MB] (249 MBps) [2024-12-05T19:45:20.077Z] Copying: 1024/1024 [MB] (average 229 MBps) 00:26:52.822 00:26:52.822 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 79366 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:52.822 19:45:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:52.822 [2024-12-05 19:45:19.878874] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:26:52.822 [2024-12-05 19:45:19.878995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80067 ] 00:26:52.822 [2024-12-05 19:45:20.035199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.082 [2024-12-05 19:45:20.118454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.082 [2024-12-05 19:45:20.331141] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:53.082 [2024-12-05 19:45:20.331185] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:53.341 [2024-12-05 19:45:20.394418] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:53.341 [2024-12-05 19:45:20.394834] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:53.341 [2024-12-05 19:45:20.394976] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:53.341 [2024-12-05 19:45:20.575921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.341 [2024-12-05 19:45:20.575970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:53.341 [2024-12-05 19:45:20.575983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:53.341 [2024-12-05 19:45:20.575994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.341 [2024-12-05 19:45:20.576045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.341 [2024-12-05 19:45:20.576060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:53.341 [2024-12-05 19:45:20.576073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:53.341 [2024-12-05 19:45:20.576080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.341 [2024-12-05 19:45:20.576100] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:53.341 
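This is the dirty-shutdown step itself, condensed: the spdk_tgt that hosted ftl0 (pid 79366) is hard-killed at script line 83, its stale trace shm file is removed, and the spdk_dd run at line 88 re-creates ftl0 from the saved ftl.json, which is why the blobstore recovery messages appear above and the superblock load below reports shm_clean 0. The visible sequence, as a sketch (variable names hypothetical):

    kill -9 "$svcpid"                                # hard-kill the target hosting ftl0
    rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"       # drop its trace shm file
    "$SPDK_BIN_DIR/spdk_dd" --if="$testfile2" --ob=ftl0 \
        --count=262144 --seek=262144 --json="$FTL_JSON"   # reload ftl0, write the second half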
[2024-12-05 19:45:20.576826] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:53.341 [2024-12-05 19:45:20.576856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.341 [2024-12-05 19:45:20.576864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:53.341 [2024-12-05 19:45:20.576872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:26:53.341 [2024-12-05 19:45:20.576880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.341 [2024-12-05 19:45:20.577948] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:53.341 [2024-12-05 19:45:20.590473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.341 [2024-12-05 19:45:20.590509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:53.341 [2024-12-05 19:45:20.590520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.525 ms 00:26:53.341 [2024-12-05 19:45:20.590528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.341 [2024-12-05 19:45:20.590583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.341 [2024-12-05 19:45:20.590593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:53.341 [2024-12-05 19:45:20.590601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:26:53.341 [2024-12-05 19:45:20.590607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.646 [2024-12-05 19:45:20.595488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.646 [2024-12-05 19:45:20.595522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:53.646 [2024-12-05 19:45:20.595532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.829 ms 00:26:53.646 [2024-12-05 19:45:20.595540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.646 [2024-12-05 19:45:20.595612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.646 [2024-12-05 19:45:20.595621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:53.646 [2024-12-05 19:45:20.595629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:26:53.646 [2024-12-05 19:45:20.595636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.646 [2024-12-05 19:45:20.595720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.646 [2024-12-05 19:45:20.595732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:53.646 [2024-12-05 19:45:20.595740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:53.646 [2024-12-05 19:45:20.595747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.646 [2024-12-05 19:45:20.595768] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:53.646 [2024-12-05 19:45:20.599299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.646 [2024-12-05 19:45:20.599328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:53.646 [2024-12-05 19:45:20.599338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.535 ms 00:26:53.646 [2024-12-05 19:45:20.599346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:53.646 [2024-12-05 19:45:20.599377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.646 [2024-12-05 19:45:20.599385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:53.646 [2024-12-05 19:45:20.599393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:53.646 [2024-12-05 19:45:20.599401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.646 [2024-12-05 19:45:20.599423] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:53.646 [2024-12-05 19:45:20.599442] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:53.646 [2024-12-05 19:45:20.599476] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:53.646 [2024-12-05 19:45:20.599491] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:53.646 [2024-12-05 19:45:20.599592] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:53.646 [2024-12-05 19:45:20.599603] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:53.646 [2024-12-05 19:45:20.599613] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:53.646 [2024-12-05 19:45:20.599626] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:53.646 [2024-12-05 19:45:20.599634] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:53.646 [2024-12-05 19:45:20.599642] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:53.646 [2024-12-05 19:45:20.599649] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:53.646 [2024-12-05 19:45:20.599657] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:53.646 [2024-12-05 19:45:20.599664] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:53.646 [2024-12-05 19:45:20.599682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.646 [2024-12-05 19:45:20.599689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:53.646 [2024-12-05 19:45:20.599697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:26:53.646 [2024-12-05 19:45:20.599704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.646 [2024-12-05 19:45:20.599786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.646 [2024-12-05 19:45:20.599796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:53.646 [2024-12-05 19:45:20.599803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:53.646 [2024-12-05 19:45:20.599810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.646 [2024-12-05 19:45:20.599921] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:53.646 [2024-12-05 19:45:20.599931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:53.646 [2024-12-05 19:45:20.599939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:53.646 [2024-12-05 19:45:20.599946] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.646 [2024-12-05 19:45:20.599954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:53.646 [2024-12-05 19:45:20.599960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:53.646 [2024-12-05 19:45:20.599967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:53.646 [2024-12-05 19:45:20.599975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:53.646 [2024-12-05 19:45:20.599983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:53.646 [2024-12-05 19:45:20.599995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:53.646 [2024-12-05 19:45:20.600001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:53.646 [2024-12-05 19:45:20.600007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:53.646 [2024-12-05 19:45:20.600013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:53.646 [2024-12-05 19:45:20.600024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:53.646 [2024-12-05 19:45:20.600031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:53.646 [2024-12-05 19:45:20.600037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.646 [2024-12-05 19:45:20.600044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:53.646 [2024-12-05 19:45:20.600050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:53.646 [2024-12-05 19:45:20.600057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.646 [2024-12-05 19:45:20.600063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:53.646 [2024-12-05 19:45:20.600070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:53.646 [2024-12-05 19:45:20.600077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:53.646 [2024-12-05 19:45:20.600083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:53.646 [2024-12-05 19:45:20.600090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:53.646 [2024-12-05 19:45:20.600096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:53.646 [2024-12-05 19:45:20.600102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:53.646 [2024-12-05 19:45:20.600108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:53.646 [2024-12-05 19:45:20.600114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:53.646 [2024-12-05 19:45:20.600120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:53.646 [2024-12-05 19:45:20.600127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:53.646 [2024-12-05 19:45:20.600133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:53.646 [2024-12-05 19:45:20.600139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:53.646 [2024-12-05 19:45:20.600145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:53.646 [2024-12-05 19:45:20.600152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:53.646 [2024-12-05 19:45:20.600158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:53.646 
[2024-12-05 19:45:20.600165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:53.646 [2024-12-05 19:45:20.600171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:53.646 [2024-12-05 19:45:20.600178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:53.646 [2024-12-05 19:45:20.600184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:53.646 [2024-12-05 19:45:20.600190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.646 [2024-12-05 19:45:20.600196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:53.646 [2024-12-05 19:45:20.600203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:53.646 [2024-12-05 19:45:20.600209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.646 [2024-12-05 19:45:20.600215] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:53.646 [2024-12-05 19:45:20.600223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:53.646 [2024-12-05 19:45:20.600234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:53.646 [2024-12-05 19:45:20.600240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.646 [2024-12-05 19:45:20.600248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:53.646 [2024-12-05 19:45:20.600254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:53.646 [2024-12-05 19:45:20.600261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:53.646 [2024-12-05 19:45:20.600267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:53.646 [2024-12-05 19:45:20.600274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:53.646 [2024-12-05 19:45:20.600280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:53.646 [2024-12-05 19:45:20.600288] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:53.646 [2024-12-05 19:45:20.600297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:53.646 [2024-12-05 19:45:20.600305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:53.646 [2024-12-05 19:45:20.600312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:53.646 [2024-12-05 19:45:20.600319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:53.646 [2024-12-05 19:45:20.600326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:53.646 [2024-12-05 19:45:20.600333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:53.646 [2024-12-05 19:45:20.600339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:53.646 [2024-12-05 19:45:20.600346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:26:53.646 [2024-12-05 19:45:20.600353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:53.646 [2024-12-05 19:45:20.600360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:53.646 [2024-12-05 19:45:20.600366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:53.646 [2024-12-05 19:45:20.600373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:53.646 [2024-12-05 19:45:20.600380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:53.646 [2024-12-05 19:45:20.600387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:53.646 [2024-12-05 19:45:20.600394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:53.646 [2024-12-05 19:45:20.600400] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:53.647 [2024-12-05 19:45:20.600408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:53.647 [2024-12-05 19:45:20.600416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:53.647 [2024-12-05 19:45:20.600423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:53.647 [2024-12-05 19:45:20.600430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:53.647 [2024-12-05 19:45:20.600437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:53.647 [2024-12-05 19:45:20.600445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.647 [2024-12-05 19:45:20.600451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:53.647 [2024-12-05 19:45:20.600459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:26:53.647 [2024-12-05 19:45:20.600466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.647 [2024-12-05 19:45:20.626833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.647 [2024-12-05 19:45:20.626986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:53.647 [2024-12-05 19:45:20.627042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.323 ms 00:26:53.647 [2024-12-05 19:45:20.627065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.647 [2024-12-05 19:45:20.627177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.647 [2024-12-05 19:45:20.627198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:53.647 [2024-12-05 19:45:20.627253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:26:53.647 [2024-12-05 
19:45:20.627274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.647 [2024-12-05 19:45:20.673696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.647 [2024-12-05 19:45:20.673901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:53.647 [2024-12-05 19:45:20.673969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.342 ms 00:26:53.647 [2024-12-05 19:45:20.674031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.647 [2024-12-05 19:45:20.674103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.647 [2024-12-05 19:45:20.674230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:53.647 [2024-12-05 19:45:20.674262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:53.647 [2024-12-05 19:45:20.674311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.647 [2024-12-05 19:45:20.674739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.647 [2024-12-05 19:45:20.674827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:53.647 [2024-12-05 19:45:20.674877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:26:53.647 [2024-12-05 19:45:20.674929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.647 [2024-12-05 19:45:20.675068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.647 [2024-12-05 19:45:20.675091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:53.647 [2024-12-05 19:45:20.675136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:26:53.647 [2024-12-05 19:45:20.675157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.647 [2024-12-05 19:45:20.688154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.647 [2024-12-05 19:45:20.688263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:53.647 [2024-12-05 19:45:20.688310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.967 ms 00:26:53.647 [2024-12-05 19:45:20.688348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.647 [2024-12-05 19:45:20.700785] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:53.647 [2024-12-05 19:45:20.700918] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:53.647 [2024-12-05 19:45:20.700983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.647 [2024-12-05 19:45:20.700993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:53.647 [2024-12-05 19:45:20.701002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.516 ms 00:26:53.647 [2024-12-05 19:45:20.701010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.647 [2024-12-05 19:45:20.725117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.647 [2024-12-05 19:45:20.725155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:53.647 [2024-12-05 19:45:20.725167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.071 ms 00:26:53.647 [2024-12-05 19:45:20.725176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:53.647 [2024-12-05 19:45:20.737173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.647 [2024-12-05 19:45:20.737212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:53.647 [2024-12-05 19:45:20.737224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.686 ms 00:26:53.647 [2024-12-05 19:45:20.737232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.647 [2024-12-05 19:45:20.748234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.647 [2024-12-05 19:45:20.748349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:53.647 [2024-12-05 19:45:20.748406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.961 ms 00:26:53.647 [2024-12-05 19:45:20.748428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.647 [2024-12-05 19:45:20.749291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.647 [2024-12-05 19:45:20.749418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:53.647 [2024-12-05 19:45:20.749478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:26:53.647 [2024-12-05 19:45:20.749523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.647 [2024-12-05 19:45:20.804634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.647 [2024-12-05 19:45:20.804845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:53.647 [2024-12-05 19:45:20.804901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.070 ms 00:26:53.647 [2024-12-05 19:45:20.804924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.647 [2024-12-05 19:45:20.815841] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:53.647 [2024-12-05 19:45:20.818609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.647 [2024-12-05 19:45:20.818728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:53.647 [2024-12-05 19:45:20.818785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.629 ms 00:26:53.647 [2024-12-05 19:45:20.818844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.647 [2024-12-05 19:45:20.818966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.647 [2024-12-05 19:45:20.819004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:53.647 [2024-12-05 19:45:20.819056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:53.647 [2024-12-05 19:45:20.819078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.647 [2024-12-05 19:45:20.819164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.647 [2024-12-05 19:45:20.819190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:53.647 [2024-12-05 19:45:20.819239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:53.647 [2024-12-05 19:45:20.819260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.647 [2024-12-05 19:45:20.819299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.647 [2024-12-05 19:45:20.819373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 
00:26:53.647 [2024-12-05 19:45:20.819397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:26:53.647 [2024-12-05 19:45:20.819415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:53.647 [2024-12-05 19:45:20.819460] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:26:53.647 [2024-12-05 19:45:20.819518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:53.647 [2024-12-05 19:45:20.819540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:26:53.647 [2024-12-05 19:45:20.819560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms
00:26:53.647 [2024-12-05 19:45:20.819582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:53.647 [2024-12-05 19:45:20.842988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:53.647 [2024-12-05 19:45:20.843120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:26:53.647 [2024-12-05 19:45:20.843174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.236 ms
00:26:53.647 [2024-12-05 19:45:20.843199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:53.647 [2024-12-05 19:45:20.843338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:53.647 [2024-12-05 19:45:20.843373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:26:53.647 [2024-12-05 19:45:20.843455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
00:26:53.647 [2024-12-05 19:45:20.843477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:53.647 [2024-12-05 19:45:20.844439] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 268.108 ms, result 0
00:26:55.020  [2024-12-05T19:45:22.863Z] Copying: 44/1024 [MB] (44 MBps) [2024-12-05T19:45:24.242Z] Copying: 83/1024 [MB] (38 MBps) [2024-12-05T19:45:24.861Z] Copying: 120/1024 [MB] (37 MBps) [2024-12-05T19:45:26.239Z] Copying: 155/1024 [MB] (34 MBps) [2024-12-05T19:45:27.179Z] Copying: 179/1024 [MB] (24 MBps) [2024-12-05T19:45:28.113Z] Copying: 208/1024 [MB] (28 MBps) [2024-12-05T19:45:29.049Z] Copying: 254/1024 [MB] (45 MBps) [2024-12-05T19:45:29.981Z] Copying: 298/1024 [MB] (44 MBps) [2024-12-05T19:45:30.915Z] Copying: 343/1024 [MB] (45 MBps) [2024-12-05T19:45:32.357Z] Copying: 388/1024 [MB] (45 MBps) [2024-12-05T19:45:32.928Z] Copying: 433/1024 [MB] (44 MBps) [2024-12-05T19:45:33.868Z] Copying: 478/1024 [MB] (45 MBps) [2024-12-05T19:45:35.251Z] Copying: 524/1024 [MB] (45 MBps) [2024-12-05T19:45:36.183Z] Copying: 570/1024 [MB] (45 MBps) [2024-12-05T19:45:37.115Z] Copying: 614/1024 [MB] (43 MBps) [2024-12-05T19:45:38.047Z] Copying: 655/1024 [MB] (41 MBps) [2024-12-05T19:45:38.980Z] Copying: 698/1024 [MB] (43 MBps) [2024-12-05T19:45:39.916Z] Copying: 744/1024 [MB] (45 MBps) [2024-12-05T19:45:40.866Z] Copying: 788/1024 [MB] (44 MBps) [2024-12-05T19:45:42.236Z] Copying: 827/1024 [MB] (39 MBps) [2024-12-05T19:45:43.172Z] Copying: 880/1024 [MB] (52 MBps) [2024-12-05T19:45:44.103Z] Copying: 912/1024 [MB] (31 MBps) [2024-12-05T19:45:45.076Z] Copying: 954/1024 [MB] (42 MBps) [2024-12-05T19:45:46.008Z] Copying: 983/1024 [MB] (28 MBps) [2024-12-05T19:45:46.945Z] Copying: 1023/1024 [MB] (39 MBps) [2024-12-05T19:45:46.945Z] Copying: 1024/1024 [MB] (average 39 MBps)[2024-12-05 19:45:46.859217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0]
Action 00:27:19.690 [2024-12-05 19:45:46.859273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:19.690 [2024-12-05 19:45:46.859287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:19.690 [2024-12-05 19:45:46.859296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.690 [2024-12-05 19:45:46.862150] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:19.690 [2024-12-05 19:45:46.865784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.690 [2024-12-05 19:45:46.865817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:19.690 [2024-12-05 19:45:46.865828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.497 ms 00:27:19.690 [2024-12-05 19:45:46.865843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.690 [2024-12-05 19:45:46.877678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.690 [2024-12-05 19:45:46.877722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:19.690 [2024-12-05 19:45:46.877734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.825 ms 00:27:19.690 [2024-12-05 19:45:46.877742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.690 [2024-12-05 19:45:46.896760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.690 [2024-12-05 19:45:46.896915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:19.690 [2024-12-05 19:45:46.896934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.001 ms 00:27:19.690 [2024-12-05 19:45:46.896942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.690 [2024-12-05 19:45:46.903129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.690 [2024-12-05 19:45:46.903164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:19.690 [2024-12-05 19:45:46.903177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.149 ms 00:27:19.690 [2024-12-05 19:45:46.903186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.690 [2024-12-05 19:45:46.926922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.690 [2024-12-05 19:45:46.926973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:19.690 [2024-12-05 19:45:46.926986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.689 ms 00:27:19.690 [2024-12-05 19:45:46.926994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.690 [2024-12-05 19:45:46.941045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.690 [2024-12-05 19:45:46.941092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:19.690 [2024-12-05 19:45:46.941106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.004 ms 00:27:19.690 [2024-12-05 19:45:46.941114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.949 [2024-12-05 19:45:46.998236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.949 [2024-12-05 19:45:46.998315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:19.949 [2024-12-05 19:45:46.998339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.069 ms 
00:27:19.949 [2024-12-05 19:45:46.998347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.949 [2024-12-05 19:45:47.022564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.949 [2024-12-05 19:45:47.022613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:19.949 [2024-12-05 19:45:47.022626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.200 ms 00:27:19.949 [2024-12-05 19:45:47.022646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.949 [2024-12-05 19:45:47.045577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.949 [2024-12-05 19:45:47.045617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:19.949 [2024-12-05 19:45:47.045629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.871 ms 00:27:19.949 [2024-12-05 19:45:47.045636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.949 [2024-12-05 19:45:47.068388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.949 [2024-12-05 19:45:47.068534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:19.949 [2024-12-05 19:45:47.068552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.717 ms 00:27:19.949 [2024-12-05 19:45:47.068559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.949 [2024-12-05 19:45:47.090319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.949 [2024-12-05 19:45:47.090351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:19.949 [2024-12-05 19:45:47.090361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.708 ms 00:27:19.949 [2024-12-05 19:45:47.090369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.949 [2024-12-05 19:45:47.090401] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:19.950 [2024-12-05 19:45:47.090415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 124672 / 261120 wr_cnt: 1 state: open 00:27:19.950 [2024-12-05 19:45:47.090426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090702] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 
19:45:47.090904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.090994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.091001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.091008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.091015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.091022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:19.950 [2024-12-05 19:45:47.091029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:19.951 [2024-12-05 19:45:47.091036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:19.951 [2024-12-05 19:45:47.091044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:19.951 [2024-12-05 19:45:47.091051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:19.951 [2024-12-05 19:45:47.091058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:19.951 [2024-12-05 19:45:47.091065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:19.951 [2024-12-05 19:45:47.091073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:19.951 [2024-12-05 19:45:47.091080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 
00:27:19.951 [2024-12-05 19:45:47.091087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:27:19.951 [2024-12-05 19:45:47.091094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:27:19.951 [2024-12-05 19:45:47.091102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:27:19.951 [2024-12-05 19:45:47.091110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:27:19.951 [2024-12-05 19:45:47.091117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:27:19.951 [2024-12-05 19:45:47.091124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:27:19.951 [2024-12-05 19:45:47.091131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:27:19.951 [2024-12-05 19:45:47.091138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:27:19.951 [2024-12-05 19:45:47.091146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:27:19.951 [2024-12-05 19:45:47.091153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:27:19.951 [2024-12-05 19:45:47.091160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:27:19.951 [2024-12-05 19:45:47.091167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:27:19.951 [2024-12-05 19:45:47.091175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:27:19.951 [2024-12-05 19:45:47.091182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:27:19.951 [2024-12-05 19:45:47.091189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:27:19.951 [2024-12-05 19:45:47.091204] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:19.951 [2024-12-05 19:45:47.091212] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef06c99b-746b-450b-a28e-d2a7acff1c63
00:27:19.951 [2024-12-05 19:45:47.091230] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 124672
00:27:19.951 [2024-12-05 19:45:47.091238] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 125632
00:27:19.951 [2024-12-05 19:45:47.091245] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 124672
00:27:19.951 [2024-12-05 19:45:47.091253] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0077
00:27:19.951 [2024-12-05 19:45:47.091260] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:19.951 [2024-12-05 19:45:47.091267] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:27:19.951 [2024-12-05 19:45:47.091274] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:27:19.951 [2024-12-05 19:45:47.091281] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:27:19.951 [2024-12-05 19:45:47.091287] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:27:19.951 [2024-12-05 19:45:47.091294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.951 [2024-12-05
19:45:47.091302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:19.951 [2024-12-05 19:45:47.091310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.895 ms 00:27:19.951 [2024-12-05 19:45:47.091317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.951 [2024-12-05 19:45:47.103620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.951 [2024-12-05 19:45:47.103651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:19.951 [2024-12-05 19:45:47.103661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.287 ms 00:27:19.951 [2024-12-05 19:45:47.103694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.951 [2024-12-05 19:45:47.104031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.951 [2024-12-05 19:45:47.104043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:19.951 [2024-12-05 19:45:47.104056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:27:19.951 [2024-12-05 19:45:47.104063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.951 [2024-12-05 19:45:47.136575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.951 [2024-12-05 19:45:47.136615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:19.951 [2024-12-05 19:45:47.136625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.951 [2024-12-05 19:45:47.136632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.951 [2024-12-05 19:45:47.136721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.951 [2024-12-05 19:45:47.136730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:19.951 [2024-12-05 19:45:47.136742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.951 [2024-12-05 19:45:47.136749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.951 [2024-12-05 19:45:47.136827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.951 [2024-12-05 19:45:47.136837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:19.951 [2024-12-05 19:45:47.136844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.951 [2024-12-05 19:45:47.136852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.951 [2024-12-05 19:45:47.136867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.951 [2024-12-05 19:45:47.136875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:19.951 [2024-12-05 19:45:47.136882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.951 [2024-12-05 19:45:47.136889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.212 [2024-12-05 19:45:47.214149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.212 [2024-12-05 19:45:47.214349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:20.212 [2024-12-05 19:45:47.214367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.212 [2024-12-05 19:45:47.214375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.212 [2024-12-05 19:45:47.279133] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:20.212 [2024-12-05 19:45:47.279188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:27:20.212 [2024-12-05 19:45:47.279199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:20.212 [2024-12-05 19:45:47.279215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:20.212 [2024-12-05 19:45:47.279268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:20.212 [2024-12-05 19:45:47.279276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:27:20.212 [2024-12-05 19:45:47.279285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:20.212 [2024-12-05 19:45:47.279292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:20.212 [2024-12-05 19:45:47.279342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:20.212 [2024-12-05 19:45:47.279350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:27:20.212 [2024-12-05 19:45:47.279358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:20.212 [2024-12-05 19:45:47.279365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:20.212 [2024-12-05 19:45:47.279452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:20.212 [2024-12-05 19:45:47.279461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:27:20.212 [2024-12-05 19:45:47.279469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:20.212 [2024-12-05 19:45:47.279476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:20.212 [2024-12-05 19:45:47.279504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:20.212 [2024-12-05 19:45:47.279513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:27:20.212 [2024-12-05 19:45:47.279520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:20.212 [2024-12-05 19:45:47.279527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:20.212 [2024-12-05 19:45:47.279564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:20.212 [2024-12-05 19:45:47.279573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:27:20.212 [2024-12-05 19:45:47.279581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:20.212 [2024-12-05 19:45:47.279588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:20.212 [2024-12-05 19:45:47.279629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:20.212 [2024-12-05 19:45:47.279638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:27:20.212 [2024-12-05 19:45:47.279646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:20.212 [2024-12-05 19:45:47.279654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:20.212 [2024-12-05 19:45:47.279793] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 423.650 ms, result 0
00:27:21.595
00:27:21.595
00:27:21.595 19:45:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:27:24.143 19:45:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:27:24.143 [2024-12-05 19:45:51.143699] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... [2024-12-05 19:45:51.143827] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80382 ]
00:27:24.143 [2024-12-05 19:45:51.305271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:24.404 [2024-12-05 19:45:51.437469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:24.665 [2024-12-05 19:45:51.740081] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:27:24.665 [2024-12-05 19:45:51.740345] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:27:24.665 [2024-12-05 19:45:51.904402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:24.665 [2024-12-05 19:45:51.904476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:27:24.665 [2024-12-05 19:45:51.904493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:27:24.665 [2024-12-05 19:45:51.904502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.665 [2024-12-05 19:45:51.904562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:24.665 [2024-12-05 19:45:51.904575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:27:24.665 [2024-12-05 19:45:51.904584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:27:24.665 [2024-12-05 19:45:51.904593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.665 [2024-12-05 19:45:51.904614] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:27:24.665 [2024-12-05 19:45:51.905404] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:27:24.665 [2024-12-05 19:45:51.905433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:24.665 [2024-12-05 19:45:51.905441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:27:24.665 [2024-12-05 19:45:51.905451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.823 ms
00:27:24.665 [2024-12-05 19:45:51.905459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.665 [2024-12-05 19:45:51.907300] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:27:24.973 [2024-12-05 19:45:51.922432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:24.973 [2024-12-05 19:45:51.922484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:27:24.973 [2024-12-05 19:45:51.922499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.134 ms
00:27:24.973 [2024-12-05 19:45:51.922509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.973 [2024-12-05 19:45:51.922600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:24.973 [2024-12-05 19:45:51.922611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:27:24.973 [2024-12-05
19:45:51.922620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:24.973 [2024-12-05 19:45:51.922628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.973 [2024-12-05 19:45:51.931335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.973 [2024-12-05 19:45:51.931382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:24.973 [2024-12-05 19:45:51.931393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.597 ms 00:27:24.973 [2024-12-05 19:45:51.931409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.973 [2024-12-05 19:45:51.931497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.973 [2024-12-05 19:45:51.931506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:24.973 [2024-12-05 19:45:51.931516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:27:24.973 [2024-12-05 19:45:51.931523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.973 [2024-12-05 19:45:51.931571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.973 [2024-12-05 19:45:51.931582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:24.973 [2024-12-05 19:45:51.931590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:24.973 [2024-12-05 19:45:51.931598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.973 [2024-12-05 19:45:51.931627] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:24.973 [2024-12-05 19:45:51.935735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.973 [2024-12-05 19:45:51.935774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:24.974 [2024-12-05 19:45:51.935789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.114 ms 00:27:24.974 [2024-12-05 19:45:51.935797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.974 [2024-12-05 19:45:51.935838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.974 [2024-12-05 19:45:51.935848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:24.974 [2024-12-05 19:45:51.935857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:24.974 [2024-12-05 19:45:51.935865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.974 [2024-12-05 19:45:51.935919] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:24.974 [2024-12-05 19:45:51.935948] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:24.974 [2024-12-05 19:45:51.935987] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:24.974 [2024-12-05 19:45:51.936007] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:24.974 [2024-12-05 19:45:51.936116] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:24.974 [2024-12-05 19:45:51.936128] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:24.974 [2024-12-05 19:45:51.936140] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:24.974 [2024-12-05 19:45:51.936152] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:24.974 [2024-12-05 19:45:51.936161] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:24.974 [2024-12-05 19:45:51.936170] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:24.974 [2024-12-05 19:45:51.936179] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:24.974 [2024-12-05 19:45:51.936191] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:24.974 [2024-12-05 19:45:51.936199] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:24.974 [2024-12-05 19:45:51.936207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.974 [2024-12-05 19:45:51.936215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:24.974 [2024-12-05 19:45:51.936223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:27:24.974 [2024-12-05 19:45:51.936231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.974 [2024-12-05 19:45:51.936314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.974 [2024-12-05 19:45:51.936323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:24.974 [2024-12-05 19:45:51.936331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:24.974 [2024-12-05 19:45:51.936338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.974 [2024-12-05 19:45:51.936445] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:24.974 [2024-12-05 19:45:51.936457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:24.974 [2024-12-05 19:45:51.936466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:24.974 [2024-12-05 19:45:51.936475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:24.974 [2024-12-05 19:45:51.936483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:24.974 [2024-12-05 19:45:51.936490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:24.974 [2024-12-05 19:45:51.936498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:24.974 [2024-12-05 19:45:51.936507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:24.974 [2024-12-05 19:45:51.936514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:24.974 [2024-12-05 19:45:51.936521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:24.974 [2024-12-05 19:45:51.936530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:24.974 [2024-12-05 19:45:51.936537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:24.974 [2024-12-05 19:45:51.936543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:24.974 [2024-12-05 19:45:51.936558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:24.974 [2024-12-05 19:45:51.936564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:24.974 [2024-12-05 19:45:51.936571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 
MiB 00:27:24.974 [2024-12-05 19:45:51.936579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:24.974 [2024-12-05 19:45:51.936588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:24.974 [2024-12-05 19:45:51.936595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:24.974 [2024-12-05 19:45:51.936602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:24.974 [2024-12-05 19:45:51.936609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:24.974 [2024-12-05 19:45:51.936616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:24.974 [2024-12-05 19:45:51.936624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:24.974 [2024-12-05 19:45:51.936630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:24.974 [2024-12-05 19:45:51.936637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:24.974 [2024-12-05 19:45:51.936643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:24.974 [2024-12-05 19:45:51.936650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:24.974 [2024-12-05 19:45:51.936657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:24.974 [2024-12-05 19:45:51.936716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:24.974 [2024-12-05 19:45:51.936724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:24.974 [2024-12-05 19:45:51.936732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:24.974 [2024-12-05 19:45:51.936739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:24.974 [2024-12-05 19:45:51.936746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:24.974 [2024-12-05 19:45:51.936754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:24.974 [2024-12-05 19:45:51.936761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:24.974 [2024-12-05 19:45:51.936768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:24.974 [2024-12-05 19:45:51.936775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:24.974 [2024-12-05 19:45:51.936783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:24.974 [2024-12-05 19:45:51.936791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:24.974 [2024-12-05 19:45:51.936798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:24.974 [2024-12-05 19:45:51.936825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:24.974 [2024-12-05 19:45:51.936832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:24.974 [2024-12-05 19:45:51.936839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:24.974 [2024-12-05 19:45:51.936858] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:24.974 [2024-12-05 19:45:51.936866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:24.974 [2024-12-05 19:45:51.936874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:24.974 [2024-12-05 19:45:51.936882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:24.974 [2024-12-05 19:45:51.936890] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:24.974 [2024-12-05 19:45:51.936898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:24.974 [2024-12-05 19:45:51.936906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:24.974 [2024-12-05 19:45:51.936914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:24.974 [2024-12-05 19:45:51.936921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:24.974 [2024-12-05 19:45:51.936929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:24.974 [2024-12-05 19:45:51.936938] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:24.974 [2024-12-05 19:45:51.936947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:24.974 [2024-12-05 19:45:51.936960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:24.974 [2024-12-05 19:45:51.936967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:24.974 [2024-12-05 19:45:51.936974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:24.974 [2024-12-05 19:45:51.936982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:24.974 [2024-12-05 19:45:51.936989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:24.974 [2024-12-05 19:45:51.936996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:24.974 [2024-12-05 19:45:51.937002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:24.974 [2024-12-05 19:45:51.937009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:24.975 [2024-12-05 19:45:51.937016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:24.975 [2024-12-05 19:45:51.937023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:24.975 [2024-12-05 19:45:51.937030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:24.975 [2024-12-05 19:45:51.937037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:24.975 [2024-12-05 19:45:51.937044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:24.975 [2024-12-05 19:45:51.937052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:24.975 [2024-12-05 19:45:51.937059] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:24.975 
[2024-12-05 19:45:51.937069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:24.975 [2024-12-05 19:45:51.937077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:24.975 [2024-12-05 19:45:51.937084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:24.975 [2024-12-05 19:45:51.937091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:24.975 [2024-12-05 19:45:51.937099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:24.975 [2024-12-05 19:45:51.937106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.975 [2024-12-05 19:45:51.937114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:24.975 [2024-12-05 19:45:51.937121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:27:24.975 [2024-12-05 19:45:51.937128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.975 [2024-12-05 19:45:51.970047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.975 [2024-12-05 19:45:51.970100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:24.975 [2024-12-05 19:45:51.970112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.869 ms 00:27:24.975 [2024-12-05 19:45:51.970124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.975 [2024-12-05 19:45:51.970212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.975 [2024-12-05 19:45:51.970220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:24.975 [2024-12-05 19:45:51.970229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:27:24.975 [2024-12-05 19:45:51.970238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.975 [2024-12-05 19:45:52.019658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.975 [2024-12-05 19:45:52.019747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:24.975 [2024-12-05 19:45:52.019766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.349 ms 00:27:24.975 [2024-12-05 19:45:52.019779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.975 [2024-12-05 19:45:52.019855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.975 [2024-12-05 19:45:52.019869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:24.975 [2024-12-05 19:45:52.019888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:24.975 [2024-12-05 19:45:52.019900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.975 [2024-12-05 19:45:52.020512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.975 [2024-12-05 19:45:52.020536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:24.975 [2024-12-05 19:45:52.020548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:27:24.975 [2024-12-05 19:45:52.020556] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:24.975 [2024-12-05 19:45:52.020787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.975 [2024-12-05 19:45:52.020801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:24.975 [2024-12-05 19:45:52.020818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:27:24.975 [2024-12-05 19:45:52.020827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.975 [2024-12-05 19:45:52.036769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.975 [2024-12-05 19:45:52.036818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:24.975 [2024-12-05 19:45:52.036830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.917 ms 00:27:24.975 [2024-12-05 19:45:52.036839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.975 [2024-12-05 19:45:52.051123] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:24.975 [2024-12-05 19:45:52.051321] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:24.975 [2024-12-05 19:45:52.051341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.975 [2024-12-05 19:45:52.051351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:24.975 [2024-12-05 19:45:52.051361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.381 ms 00:27:24.975 [2024-12-05 19:45:52.051371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.975 [2024-12-05 19:45:52.077885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.975 [2024-12-05 19:45:52.078143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:24.975 [2024-12-05 19:45:52.078169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.389 ms 00:27:24.975 [2024-12-05 19:45:52.078180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.975 [2024-12-05 19:45:52.092309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.975 [2024-12-05 19:45:52.092377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:24.975 [2024-12-05 19:45:52.092393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.939 ms 00:27:24.975 [2024-12-05 19:45:52.092401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.975 [2024-12-05 19:45:52.104929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.975 [2024-12-05 19:45:52.104981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:24.975 [2024-12-05 19:45:52.104995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.467 ms 00:27:24.975 [2024-12-05 19:45:52.105003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.975 [2024-12-05 19:45:52.105710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.975 [2024-12-05 19:45:52.105734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:24.975 [2024-12-05 19:45:52.105749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:27:24.975 [2024-12-05 19:45:52.105757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.975 [2024-12-05 
19:45:52.172331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.975 [2024-12-05 19:45:52.172634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:24.975 [2024-12-05 19:45:52.172699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.550 ms 00:27:24.975 [2024-12-05 19:45:52.172710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.975 [2024-12-05 19:45:52.185662] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:24.975 [2024-12-05 19:45:52.189748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.975 [2024-12-05 19:45:52.189791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:24.975 [2024-12-05 19:45:52.189807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.887 ms 00:27:24.975 [2024-12-05 19:45:52.189817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.975 [2024-12-05 19:45:52.189946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.975 [2024-12-05 19:45:52.189958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:24.975 [2024-12-05 19:45:52.189973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:24.975 [2024-12-05 19:45:52.189983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.975 [2024-12-05 19:45:52.192009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.975 [2024-12-05 19:45:52.192057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:24.975 [2024-12-05 19:45:52.192068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.986 ms 00:27:24.975 [2024-12-05 19:45:52.192077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.975 [2024-12-05 19:45:52.192119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.975 [2024-12-05 19:45:52.192129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:24.975 [2024-12-05 19:45:52.192139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:24.975 [2024-12-05 19:45:52.192148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.975 [2024-12-05 19:45:52.192194] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:24.975 [2024-12-05 19:45:52.192206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.975 [2024-12-05 19:45:52.192216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:24.976 [2024-12-05 19:45:52.192225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:24.976 [2024-12-05 19:45:52.192235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.976 [2024-12-05 19:45:52.219436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.976 [2024-12-05 19:45:52.219623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:24.976 [2024-12-05 19:45:52.219653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.181 ms 00:27:24.976 [2024-12-05 19:45:52.219663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.976 [2024-12-05 19:45:52.219853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.976 [2024-12-05 19:45:52.219881] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:24.976 [2024-12-05 19:45:52.219892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:24.976 [2024-12-05 19:45:52.219901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.976 [2024-12-05 19:45:52.221280] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 316.372 ms, result 0 00:27:26.363  [2024-12-05T19:45:54.561Z] Copying: 1856/1048576 [kB] (1856 kBps) [2024-12-05T19:45:55.504Z] Copying: 6276/1048576 [kB] (4420 kBps) [2024-12-05T19:45:56.444Z] Copying: 16440/1048576 [kB] (10164 kBps) [2024-12-05T19:45:57.830Z] Copying: 31/1024 [MB] (15 MBps) [2024-12-05T19:45:58.776Z] Copying: 45/1024 [MB] (14 MBps) [2024-12-05T19:45:59.719Z] Copying: 60/1024 [MB] (14 MBps) [2024-12-05T19:46:00.661Z] Copying: 74/1024 [MB] (13 MBps) [2024-12-05T19:46:01.607Z] Copying: 87/1024 [MB] (13 MBps) [2024-12-05T19:46:02.550Z] Copying: 101/1024 [MB] (13 MBps) [2024-12-05T19:46:03.493Z] Copying: 115/1024 [MB] (14 MBps) [2024-12-05T19:46:04.437Z] Copying: 130/1024 [MB] (14 MBps) [2024-12-05T19:46:05.823Z] Copying: 144/1024 [MB] (14 MBps) [2024-12-05T19:46:06.764Z] Copying: 161/1024 [MB] (16 MBps) [2024-12-05T19:46:07.706Z] Copying: 179/1024 [MB] (17 MBps) [2024-12-05T19:46:08.647Z] Copying: 195/1024 [MB] (16 MBps) [2024-12-05T19:46:09.605Z] Copying: 212/1024 [MB] (16 MBps) [2024-12-05T19:46:10.549Z] Copying: 229/1024 [MB] (16 MBps) [2024-12-05T19:46:11.493Z] Copying: 246/1024 [MB] (16 MBps) [2024-12-05T19:46:12.437Z] Copying: 261/1024 [MB] (15 MBps) [2024-12-05T19:46:13.820Z] Copying: 278/1024 [MB] (16 MBps) [2024-12-05T19:46:14.756Z] Copying: 307/1024 [MB] (28 MBps) [2024-12-05T19:46:15.698Z] Copying: 332/1024 [MB] (24 MBps) [2024-12-05T19:46:16.738Z] Copying: 359/1024 [MB] (27 MBps) [2024-12-05T19:46:17.682Z] Copying: 382/1024 [MB] (22 MBps) [2024-12-05T19:46:18.621Z] Copying: 415/1024 [MB] (32 MBps) [2024-12-05T19:46:19.560Z] Copying: 443/1024 [MB] (27 MBps) [2024-12-05T19:46:20.494Z] Copying: 473/1024 [MB] (30 MBps) [2024-12-05T19:46:21.432Z] Copying: 503/1024 [MB] (30 MBps) [2024-12-05T19:46:22.832Z] Copying: 540/1024 [MB] (36 MBps) [2024-12-05T19:46:23.771Z] Copying: 573/1024 [MB] (32 MBps) [2024-12-05T19:46:24.772Z] Copying: 603/1024 [MB] (30 MBps) [2024-12-05T19:46:25.710Z] Copying: 633/1024 [MB] (29 MBps) [2024-12-05T19:46:26.651Z] Copying: 663/1024 [MB] (30 MBps) [2024-12-05T19:46:27.588Z] Copying: 697/1024 [MB] (33 MBps) [2024-12-05T19:46:28.605Z] Copying: 741/1024 [MB] (44 MBps) [2024-12-05T19:46:29.546Z] Copying: 786/1024 [MB] (44 MBps) [2024-12-05T19:46:30.485Z] Copying: 827/1024 [MB] (40 MBps) [2024-12-05T19:46:31.429Z] Copying: 854/1024 [MB] (26 MBps) [2024-12-05T19:46:32.826Z] Copying: 874/1024 [MB] (20 MBps) [2024-12-05T19:46:33.766Z] Copying: 894/1024 [MB] (19 MBps) [2024-12-05T19:46:34.709Z] Copying: 917/1024 [MB] (22 MBps) [2024-12-05T19:46:35.684Z] Copying: 941/1024 [MB] (24 MBps) [2024-12-05T19:46:36.696Z] Copying: 965/1024 [MB] (23 MBps) [2024-12-05T19:46:37.639Z] Copying: 991/1024 [MB] (25 MBps) [2024-12-05T19:46:37.902Z] Copying: 1015/1024 [MB] (24 MBps) [2024-12-05T19:46:37.902Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-05 19:46:37.718854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.647 [2024-12-05 19:46:37.718921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:10.647 [2024-12-05 19:46:37.718936] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:10.647 [2024-12-05 19:46:37.718946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.647 [2024-12-05 19:46:37.718970] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:10.647 [2024-12-05 19:46:37.722479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.647 [2024-12-05 19:46:37.722516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:10.647 [2024-12-05 19:46:37.722528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.492 ms 00:28:10.647 [2024-12-05 19:46:37.722536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.647 [2024-12-05 19:46:37.722785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.647 [2024-12-05 19:46:37.722802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:10.647 [2024-12-05 19:46:37.722811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:28:10.647 [2024-12-05 19:46:37.722819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.647 [2024-12-05 19:46:37.733237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.647 [2024-12-05 19:46:37.733272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:10.647 [2024-12-05 19:46:37.733283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.402 ms 00:28:10.647 [2024-12-05 19:46:37.733290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.647 [2024-12-05 19:46:37.739480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.647 [2024-12-05 19:46:37.739509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:10.647 [2024-12-05 19:46:37.739525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.165 ms 00:28:10.647 [2024-12-05 19:46:37.739533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.647 [2024-12-05 19:46:37.764074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.647 [2024-12-05 19:46:37.764109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:10.647 [2024-12-05 19:46:37.764120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.494 ms 00:28:10.647 [2024-12-05 19:46:37.764128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.647 [2024-12-05 19:46:37.778370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.647 [2024-12-05 19:46:37.778404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:10.647 [2024-12-05 19:46:37.778416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.210 ms 00:28:10.647 [2024-12-05 19:46:37.778424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.647 [2024-12-05 19:46:37.779948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.647 [2024-12-05 19:46:37.779979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:10.647 [2024-12-05 19:46:37.779988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.501 ms 00:28:10.647 [2024-12-05 19:46:37.780001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.647 [2024-12-05 19:46:37.802611] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.647 [2024-12-05 19:46:37.802785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:10.647 [2024-12-05 19:46:37.802802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.596 ms 00:28:10.647 [2024-12-05 19:46:37.802810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.647 [2024-12-05 19:46:37.825614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.647 [2024-12-05 19:46:37.825648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:10.647 [2024-12-05 19:46:37.825659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.775 ms 00:28:10.647 [2024-12-05 19:46:37.825667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.647 [2024-12-05 19:46:37.848269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.647 [2024-12-05 19:46:37.848306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:10.647 [2024-12-05 19:46:37.848317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.553 ms 00:28:10.647 [2024-12-05 19:46:37.848326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.647 [2024-12-05 19:46:37.870627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.647 [2024-12-05 19:46:37.870661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:10.647 [2024-12-05 19:46:37.870685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.244 ms 00:28:10.647 [2024-12-05 19:46:37.870693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.647 [2024-12-05 19:46:37.870724] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:10.647 [2024-12-05 19:46:37.870739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:10.647 [2024-12-05 19:46:37.870751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:10.647 [2024-12-05 19:46:37.870759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:10.647 [2024-12-05 19:46:37.870767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:10.647 [2024-12-05 19:46:37.870775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:10.647 [2024-12-05 19:46:37.870783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:10.647 [2024-12-05 19:46:37.870790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:10.647 [2024-12-05 19:46:37.870798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:10.647 [2024-12-05 19:46:37.870805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:10.647 [2024-12-05 19:46:37.870814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:10.647 [2024-12-05 19:46:37.870821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:10.647 [2024-12-05 19:46:37.870829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 
0 / 261120 wr_cnt: 0 state: free 00:28:10.647 [2024-12-05 19:46:37.870836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:10.647 [2024-12-05 19:46:37.870844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:10.647 [2024-12-05 19:46:37.870852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:10.647 [2024-12-05 19:46:37.870859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:10.647 [2024-12-05 19:46:37.870866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:10.647 [2024-12-05 19:46:37.870873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:10.647 [2024-12-05 19:46:37.870880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:10.647 [2024-12-05 19:46:37.870887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:10.647 [2024-12-05 19:46:37.870895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:10.647 [2024-12-05 19:46:37.870902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:10.647 [2024-12-05 19:46:37.870909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:10.647 [2024-12-05 19:46:37.870916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.870923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.870930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.870938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.870945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.870954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.870962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.870969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.870976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.870983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.870990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.870998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871193] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 
19:46:37.871381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:10.648 [2024-12-05 19:46:37.871497] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:10.648 [2024-12-05 19:46:37.871505] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef06c99b-746b-450b-a28e-d2a7acff1c63 00:28:10.648 [2024-12-05 19:46:37.871512] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:10.648 [2024-12-05 19:46:37.871520] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 139968 00:28:10.648 [2024-12-05 19:46:37.871531] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 137984 00:28:10.648 [2024-12-05 19:46:37.871540] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0144 00:28:10.648 [2024-12-05 19:46:37.871548] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:10.648 [2024-12-05 19:46:37.871563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:10.648 [2024-12-05 19:46:37.871571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:10.648 [2024-12-05 19:46:37.871578] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:10.648 [2024-12-05 19:46:37.871584] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:10.648 [2024-12-05 19:46:37.871591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.648 [2024-12-05 19:46:37.871598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:10.648 [2024-12-05 19:46:37.871606] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:28:10.648 [2024-12-05 19:46:37.871613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.648 [2024-12-05 19:46:37.883790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.648 [2024-12-05 19:46:37.883924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:10.648 [2024-12-05 19:46:37.883939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.161 ms 00:28:10.649 [2024-12-05 19:46:37.883947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.649 [2024-12-05 19:46:37.884283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.649 [2024-12-05 19:46:37.884291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:10.649 [2024-12-05 19:46:37.884299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:28:10.649 [2024-12-05 19:46:37.884306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.908 [2024-12-05 19:46:37.916981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.908 [2024-12-05 19:46:37.917016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:10.908 [2024-12-05 19:46:37.917026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.908 [2024-12-05 19:46:37.917034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.908 [2024-12-05 19:46:37.917089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.908 [2024-12-05 19:46:37.917098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:10.908 [2024-12-05 19:46:37.917106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.908 [2024-12-05 19:46:37.917113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.908 [2024-12-05 19:46:37.917171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.908 [2024-12-05 19:46:37.917181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:10.909 [2024-12-05 19:46:37.917188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.909 [2024-12-05 19:46:37.917195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.909 [2024-12-05 19:46:37.917210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.909 [2024-12-05 19:46:37.917217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:10.909 [2024-12-05 19:46:37.917224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.909 [2024-12-05 19:46:37.917231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.909 [2024-12-05 19:46:37.994587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.909 [2024-12-05 19:46:37.994810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:10.909 [2024-12-05 19:46:37.994828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.909 [2024-12-05 19:46:37.994837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.909 [2024-12-05 19:46:38.058431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.909 [2024-12-05 19:46:38.058609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 
00:28:10.909 [2024-12-05 19:46:38.058624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.909 [2024-12-05 19:46:38.058633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.909 [2024-12-05 19:46:38.058731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.909 [2024-12-05 19:46:38.058746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:10.909 [2024-12-05 19:46:38.058754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.909 [2024-12-05 19:46:38.058762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.909 [2024-12-05 19:46:38.058794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.909 [2024-12-05 19:46:38.058802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:10.909 [2024-12-05 19:46:38.058810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.909 [2024-12-05 19:46:38.058817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.909 [2024-12-05 19:46:38.058904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.909 [2024-12-05 19:46:38.058917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:10.909 [2024-12-05 19:46:38.058925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.909 [2024-12-05 19:46:38.058933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.909 [2024-12-05 19:46:38.058960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.909 [2024-12-05 19:46:38.058970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:10.909 [2024-12-05 19:46:38.058977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.909 [2024-12-05 19:46:38.058985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.909 [2024-12-05 19:46:38.059016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.909 [2024-12-05 19:46:38.059025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:10.909 [2024-12-05 19:46:38.059035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.909 [2024-12-05 19:46:38.059042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.909 [2024-12-05 19:46:38.059080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.909 [2024-12-05 19:46:38.059089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:10.909 [2024-12-05 19:46:38.059097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.909 [2024-12-05 19:46:38.059104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.909 [2024-12-05 19:46:38.059211] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.344 ms, result 0 00:28:11.848 00:28:11.848 00:28:12.106 19:46:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:14.091 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:14.091 19:46:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 
--skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:14.091 [2024-12-05 19:46:41.333578] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:28:14.091 [2024-12-05 19:46:41.333718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80893 ] 00:28:14.349 [2024-12-05 19:46:41.493458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.350 [2024-12-05 19:46:41.596019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.609 [2024-12-05 19:46:41.860014] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:14.609 [2024-12-05 19:46:41.860079] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:14.871 [2024-12-05 19:46:42.014376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.871 [2024-12-05 19:46:42.014429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:14.871 [2024-12-05 19:46:42.014441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:14.871 [2024-12-05 19:46:42.014450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.871 [2024-12-05 19:46:42.014499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.871 [2024-12-05 19:46:42.014511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:14.871 [2024-12-05 19:46:42.014520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:14.871 [2024-12-05 19:46:42.014527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.871 [2024-12-05 19:46:42.014547] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:14.871 [2024-12-05 19:46:42.015261] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:14.871 [2024-12-05 19:46:42.015657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.871 [2024-12-05 19:46:42.015679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:14.871 [2024-12-05 19:46:42.015689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.115 ms 00:28:14.871 [2024-12-05 19:46:42.015697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.871 [2024-12-05 19:46:42.016873] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:14.871 [2024-12-05 19:46:42.029546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.871 [2024-12-05 19:46:42.029581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:14.871 [2024-12-05 19:46:42.029594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.674 ms 00:28:14.871 [2024-12-05 19:46:42.029604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.871 [2024-12-05 19:46:42.029665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.871 [2024-12-05 19:46:42.029691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:14.871 [2024-12-05 19:46:42.029700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:28:14.871 
[2024-12-05 19:46:42.029708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.871 [2024-12-05 19:46:42.034880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.871 [2024-12-05 19:46:42.034910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:14.871 [2024-12-05 19:46:42.034919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.110 ms 00:28:14.871 [2024-12-05 19:46:42.034931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.871 [2024-12-05 19:46:42.035000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.871 [2024-12-05 19:46:42.035010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:14.871 [2024-12-05 19:46:42.035018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:14.871 [2024-12-05 19:46:42.035026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.871 [2024-12-05 19:46:42.035067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.871 [2024-12-05 19:46:42.035077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:14.871 [2024-12-05 19:46:42.035085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:14.871 [2024-12-05 19:46:42.035093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.871 [2024-12-05 19:46:42.035119] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:14.871 [2024-12-05 19:46:42.038441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.871 [2024-12-05 19:46:42.038469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:14.871 [2024-12-05 19:46:42.038482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.328 ms 00:28:14.871 [2024-12-05 19:46:42.038490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.871 [2024-12-05 19:46:42.038520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.871 [2024-12-05 19:46:42.038529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:14.871 [2024-12-05 19:46:42.038537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:14.871 [2024-12-05 19:46:42.038544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.871 [2024-12-05 19:46:42.038563] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:14.871 [2024-12-05 19:46:42.038584] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:14.871 [2024-12-05 19:46:42.038620] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:14.871 [2024-12-05 19:46:42.038637] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:14.871 [2024-12-05 19:46:42.038755] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:14.871 [2024-12-05 19:46:42.038768] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:14.871 [2024-12-05 19:46:42.038778] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:28:14.871 [2024-12-05 19:46:42.038789] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:14.871 [2024-12-05 19:46:42.038797] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:14.871 [2024-12-05 19:46:42.038805] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:14.871 [2024-12-05 19:46:42.038813] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:14.871 [2024-12-05 19:46:42.038824] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:14.871 [2024-12-05 19:46:42.038832] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:14.871 [2024-12-05 19:46:42.038840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.871 [2024-12-05 19:46:42.038848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:14.871 [2024-12-05 19:46:42.038856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:28:14.871 [2024-12-05 19:46:42.038862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.871 [2024-12-05 19:46:42.038945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.871 [2024-12-05 19:46:42.038954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:14.871 [2024-12-05 19:46:42.038962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:14.871 [2024-12-05 19:46:42.038969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.871 [2024-12-05 19:46:42.039085] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:14.871 [2024-12-05 19:46:42.039097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:14.871 [2024-12-05 19:46:42.039105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:14.871 [2024-12-05 19:46:42.039113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:14.871 [2024-12-05 19:46:42.039121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:14.871 [2024-12-05 19:46:42.039128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:14.871 [2024-12-05 19:46:42.039135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:14.871 [2024-12-05 19:46:42.039142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:14.871 [2024-12-05 19:46:42.039150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:14.871 [2024-12-05 19:46:42.039156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:14.871 [2024-12-05 19:46:42.039164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:14.871 [2024-12-05 19:46:42.039171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:14.871 [2024-12-05 19:46:42.039177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:14.871 [2024-12-05 19:46:42.039189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:14.871 [2024-12-05 19:46:42.039196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:14.871 [2024-12-05 19:46:42.039202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:14.871 [2024-12-05 19:46:42.039208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:28:14.871 [2024-12-05 19:46:42.039215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:14.871 [2024-12-05 19:46:42.039224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:14.871 [2024-12-05 19:46:42.039230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:14.871 [2024-12-05 19:46:42.039237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:14.872 [2024-12-05 19:46:42.039244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:14.872 [2024-12-05 19:46:42.039250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:14.872 [2024-12-05 19:46:42.039256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:14.872 [2024-12-05 19:46:42.039262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:14.872 [2024-12-05 19:46:42.039269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:14.872 [2024-12-05 19:46:42.039275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:14.872 [2024-12-05 19:46:42.039281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:14.872 [2024-12-05 19:46:42.039288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:14.872 [2024-12-05 19:46:42.039295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:14.872 [2024-12-05 19:46:42.039301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:14.872 [2024-12-05 19:46:42.039308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:14.872 [2024-12-05 19:46:42.039314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:14.872 [2024-12-05 19:46:42.039320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:14.872 [2024-12-05 19:46:42.039327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:14.872 [2024-12-05 19:46:42.039333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:14.872 [2024-12-05 19:46:42.039340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:14.872 [2024-12-05 19:46:42.039347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:14.872 [2024-12-05 19:46:42.039353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:14.872 [2024-12-05 19:46:42.039359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:14.872 [2024-12-05 19:46:42.039365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:14.872 [2024-12-05 19:46:42.039372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:14.872 [2024-12-05 19:46:42.039378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:14.872 [2024-12-05 19:46:42.039384] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:14.872 [2024-12-05 19:46:42.039391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:14.872 [2024-12-05 19:46:42.039398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:14.872 [2024-12-05 19:46:42.039405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:14.872 [2024-12-05 19:46:42.039412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:14.872 [2024-12-05 19:46:42.039418] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:14.872 [2024-12-05 19:46:42.039425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:14.872 [2024-12-05 19:46:42.039432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:14.872 [2024-12-05 19:46:42.039438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:14.872 [2024-12-05 19:46:42.039444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:14.872 [2024-12-05 19:46:42.039452] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:14.872 [2024-12-05 19:46:42.039460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:14.872 [2024-12-05 19:46:42.039471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:14.872 [2024-12-05 19:46:42.039478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:14.872 [2024-12-05 19:46:42.039485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:14.872 [2024-12-05 19:46:42.039492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:14.872 [2024-12-05 19:46:42.039498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:14.872 [2024-12-05 19:46:42.039506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:14.872 [2024-12-05 19:46:42.039512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:14.872 [2024-12-05 19:46:42.039519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:14.872 [2024-12-05 19:46:42.039526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:14.872 [2024-12-05 19:46:42.039532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:14.872 [2024-12-05 19:46:42.039539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:14.872 [2024-12-05 19:46:42.039546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:14.872 [2024-12-05 19:46:42.039552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:14.872 [2024-12-05 19:46:42.039559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:14.872 [2024-12-05 19:46:42.039566] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:14.872 [2024-12-05 19:46:42.039573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:14.872 [2024-12-05 19:46:42.039581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:14.872 [2024-12-05 19:46:42.039588] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:14.872 [2024-12-05 19:46:42.039595] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:14.872 [2024-12-05 19:46:42.039601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:14.872 [2024-12-05 19:46:42.039608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.872 [2024-12-05 19:46:42.039615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:14.872 [2024-12-05 19:46:42.039622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:28:14.872 [2024-12-05 19:46:42.039630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.872 [2024-12-05 19:46:42.065904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.872 [2024-12-05 19:46:42.065938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:14.872 [2024-12-05 19:46:42.065951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.212 ms 00:28:14.872 [2024-12-05 19:46:42.065959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.872 [2024-12-05 19:46:42.066045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.872 [2024-12-05 19:46:42.066054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:14.872 [2024-12-05 19:46:42.066062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:28:14.872 [2024-12-05 19:46:42.066073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.872 [2024-12-05 19:46:42.110677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.872 [2024-12-05 19:46:42.110719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:14.872 [2024-12-05 19:46:42.110732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.547 ms 00:28:14.872 [2024-12-05 19:46:42.110740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.872 [2024-12-05 19:46:42.110785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.872 [2024-12-05 19:46:42.110799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:14.872 [2024-12-05 19:46:42.110808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:14.872 [2024-12-05 19:46:42.110815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.872 [2024-12-05 19:46:42.111189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.872 [2024-12-05 19:46:42.111206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:14.872 [2024-12-05 19:46:42.111216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:28:14.872 [2024-12-05 19:46:42.111223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.872 [2024-12-05 19:46:42.111354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:14.872 [2024-12-05 19:46:42.111369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:14.872 [2024-12-05 19:46:42.111378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:28:14.872 [2024-12-05 19:46:42.111386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.131 [2024-12-05 19:46:42.125051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.131 [2024-12-05 19:46:42.125084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:15.131 [2024-12-05 19:46:42.125094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.647 ms 00:28:15.131 [2024-12-05 19:46:42.125102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.131 [2024-12-05 19:46:42.137718] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:15.131 [2024-12-05 19:46:42.137753] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:15.131 [2024-12-05 19:46:42.137766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.131 [2024-12-05 19:46:42.137775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:15.132 [2024-12-05 19:46:42.137784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.567 ms 00:28:15.132 [2024-12-05 19:46:42.137791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.132 [2024-12-05 19:46:42.162477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.132 [2024-12-05 19:46:42.162515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:15.132 [2024-12-05 19:46:42.162527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.641 ms 00:28:15.132 [2024-12-05 19:46:42.162535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.132 [2024-12-05 19:46:42.174131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.132 [2024-12-05 19:46:42.174168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:15.132 [2024-12-05 19:46:42.174178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.562 ms 00:28:15.132 [2024-12-05 19:46:42.174185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.132 [2024-12-05 19:46:42.185719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.132 [2024-12-05 19:46:42.185763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:15.132 [2024-12-05 19:46:42.185773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.497 ms 00:28:15.132 [2024-12-05 19:46:42.185780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.132 [2024-12-05 19:46:42.186463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.132 [2024-12-05 19:46:42.186495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:15.132 [2024-12-05 19:46:42.186504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:28:15.132 [2024-12-05 19:46:42.186511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.132 [2024-12-05 19:46:42.243087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.132 [2024-12-05 
19:46:42.243143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:15.132 [2024-12-05 19:46:42.243156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.557 ms 00:28:15.132 [2024-12-05 19:46:42.243164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.132 [2024-12-05 19:46:42.253572] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:15.132 [2024-12-05 19:46:42.256180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.132 [2024-12-05 19:46:42.256357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:15.132 [2024-12-05 19:46:42.256375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.969 ms 00:28:15.132 [2024-12-05 19:46:42.256385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.132 [2024-12-05 19:46:42.256494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.132 [2024-12-05 19:46:42.256507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:15.132 [2024-12-05 19:46:42.256519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:15.132 [2024-12-05 19:46:42.256526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.132 [2024-12-05 19:46:42.257139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.132 [2024-12-05 19:46:42.257165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:15.132 [2024-12-05 19:46:42.257175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:28:15.132 [2024-12-05 19:46:42.257182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.132 [2024-12-05 19:46:42.257205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.132 [2024-12-05 19:46:42.257217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:15.132 [2024-12-05 19:46:42.257229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:15.132 [2024-12-05 19:46:42.257245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.132 [2024-12-05 19:46:42.257283] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:15.132 [2024-12-05 19:46:42.257294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.132 [2024-12-05 19:46:42.257302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:15.132 [2024-12-05 19:46:42.257310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:15.132 [2024-12-05 19:46:42.257317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.132 [2024-12-05 19:46:42.281028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.132 [2024-12-05 19:46:42.281065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:15.132 [2024-12-05 19:46:42.281082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.690 ms 00:28:15.132 [2024-12-05 19:46:42.281091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.132 [2024-12-05 19:46:42.281161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.132 [2024-12-05 19:46:42.281171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:15.132 [2024-12-05 
19:46:42.281179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:15.132 [2024-12-05 19:46:42.281186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.132 [2024-12-05 19:46:42.282232] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 267.399 ms, result 0 00:28:16.511  [2024-12-05T19:46:44.704Z] Copying: 18/1024 [MB] (18 MBps) [... flattened per-second copy-progress updates elided: 18/1024 MB at 2024-12-05T19:46:44Z through 1023/1024 MB at 19:48:11Z, instantaneous throughput roughly 9-19 MBps ...] [2024-12-05T19:48:11.725Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-12-05 19:48:11.690461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.470 [2024-12-05 19:48:11.690524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:44.470 [2024-12-05 19:48:11.690539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:44.470 [2024-12-05 19:48:11.690549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.470 [2024-12-05 19:48:11.690571] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:44.470 [2024-12-05 19:48:11.693491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.470 [2024-12-05 19:48:11.693527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:44.470 [2024-12-05 19:48:11.693540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.903 ms 00:29:44.470 [2024-12-05 19:48:11.693550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.470 [2024-12-05 
19:48:11.693840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.470 [2024-12-05 19:48:11.693854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:44.470 [2024-12-05 19:48:11.693866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:29:44.470 [2024-12-05 19:48:11.693877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.470 [2024-12-05 19:48:11.698340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.470 [2024-12-05 19:48:11.698357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:44.470 [2024-12-05 19:48:11.698373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.444 ms 00:29:44.470 [2024-12-05 19:48:11.698383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.470 [2024-12-05 19:48:11.706183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.470 [2024-12-05 19:48:11.706202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:44.470 [2024-12-05 19:48:11.706211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.781 ms 00:29:44.470 [2024-12-05 19:48:11.706218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.733 [2024-12-05 19:48:11.730680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.733 [2024-12-05 19:48:11.730719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:44.733 [2024-12-05 19:48:11.730730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.398 ms 00:29:44.733 [2024-12-05 19:48:11.730739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.733 [2024-12-05 19:48:11.744453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.733 [2024-12-05 19:48:11.744478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:44.733 [2024-12-05 19:48:11.744489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.680 ms 00:29:44.733 [2024-12-05 19:48:11.744503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.733 [2024-12-05 19:48:11.748990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.733 [2024-12-05 19:48:11.749012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:44.733 [2024-12-05 19:48:11.749021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.464 ms 00:29:44.733 [2024-12-05 19:48:11.749029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.733 [2024-12-05 19:48:11.774746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.733 [2024-12-05 19:48:11.774776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:44.733 [2024-12-05 19:48:11.774788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.702 ms 00:29:44.733 [2024-12-05 19:48:11.774796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.733 [2024-12-05 19:48:11.797755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.733 [2024-12-05 19:48:11.797780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:44.733 [2024-12-05 19:48:11.797791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.924 ms 00:29:44.733 [2024-12-05 19:48:11.797799] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.733 [2024-12-05 19:48:11.820832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.733 [2024-12-05 19:48:11.820960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:44.733 [2024-12-05 19:48:11.820975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.999 ms 00:29:44.733 [2024-12-05 19:48:11.820983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.733 [2024-12-05 19:48:11.844106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.733 [2024-12-05 19:48:11.844215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:44.733 [2024-12-05 19:48:11.844231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.071 ms 00:29:44.733 [2024-12-05 19:48:11.844238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.733 [2024-12-05 19:48:11.844264] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:44.733 [2024-12-05 19:48:11.844283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:44.733 [2024-12-05 19:48:11.844294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:44.733 [2024-12-05 19:48:11.844302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:44.733 [2024-12-05 19:48:11.844310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:44.733 [2024-12-05 19:48:11.844317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:44.733 [2024-12-05 19:48:11.844325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:44.733 [2024-12-05 19:48:11.844332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:44.733 [2024-12-05 19:48:11.844340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:44.733 [2024-12-05 19:48:11.844348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:44.733 [2024-12-05 19:48:11.844355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:44.733 [2024-12-05 19:48:11.844363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844415] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 
19:48:11.844752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:29:44.734 [2024-12-05 19:48:11.844938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.844996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:44.734 [2024-12-05 19:48:11.845192] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:44.735 [2024-12-05 19:48:11.845199] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef06c99b-746b-450b-a28e-d2a7acff1c63 00:29:44.735 [2024-12-05 19:48:11.845207] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:44.735 [2024-12-05 19:48:11.845214] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:44.735 [2024-12-05 19:48:11.845221] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:44.735 [2024-12-05 19:48:11.845228] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:44.735 [2024-12-05 19:48:11.845241] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:44.735 [2024-12-05 19:48:11.845249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:44.735 [2024-12-05 19:48:11.845255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:44.735 [2024-12-05 19:48:11.845262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:44.735 [2024-12-05 19:48:11.845268] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:44.735 [2024-12-05 19:48:11.845275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.735 [2024-12-05 19:48:11.845284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:44.735 [2024-12-05 19:48:11.845294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:29:44.735 [2024-12-05 19:48:11.845301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.735 [2024-12-05 19:48:11.857689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.735 [2024-12-05 19:48:11.857710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:44.735 [2024-12-05 19:48:11.857720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.372 ms 00:29:44.735 [2024-12-05 19:48:11.857729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.735 [2024-12-05 19:48:11.858071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.735 [2024-12-05 19:48:11.858080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L 
checkpointing 00:29:44.735 [2024-12-05 19:48:11.858088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:29:44.735 [2024-12-05 19:48:11.858095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.735 [2024-12-05 19:48:11.890681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.735 [2024-12-05 19:48:11.890706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:44.735 [2024-12-05 19:48:11.890715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.735 [2024-12-05 19:48:11.890724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.735 [2024-12-05 19:48:11.890775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.735 [2024-12-05 19:48:11.890783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:44.735 [2024-12-05 19:48:11.890791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.735 [2024-12-05 19:48:11.890798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.735 [2024-12-05 19:48:11.890851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.735 [2024-12-05 19:48:11.890860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:44.735 [2024-12-05 19:48:11.890868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.735 [2024-12-05 19:48:11.890875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.735 [2024-12-05 19:48:11.890890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.735 [2024-12-05 19:48:11.890901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:44.735 [2024-12-05 19:48:11.890908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.735 [2024-12-05 19:48:11.890915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.735 [2024-12-05 19:48:11.968328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.735 [2024-12-05 19:48:11.968362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:44.735 [2024-12-05 19:48:11.968373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.735 [2024-12-05 19:48:11.968381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.997 [2024-12-05 19:48:12.031824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.997 [2024-12-05 19:48:12.031966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:44.997 [2024-12-05 19:48:12.031983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.997 [2024-12-05 19:48:12.031991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.997 [2024-12-05 19:48:12.032062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.997 [2024-12-05 19:48:12.032073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:44.997 [2024-12-05 19:48:12.032080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.997 [2024-12-05 19:48:12.032087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.997 [2024-12-05 19:48:12.032118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.997 [2024-12-05 
19:48:12.032126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:44.997 [2024-12-05 19:48:12.032139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.997 [2024-12-05 19:48:12.032146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.997 [2024-12-05 19:48:12.032237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.997 [2024-12-05 19:48:12.032246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:44.997 [2024-12-05 19:48:12.032254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.997 [2024-12-05 19:48:12.032262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.997 [2024-12-05 19:48:12.032289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.997 [2024-12-05 19:48:12.032298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:44.997 [2024-12-05 19:48:12.032307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.997 [2024-12-05 19:48:12.032318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.997 [2024-12-05 19:48:12.032350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.997 [2024-12-05 19:48:12.032359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:44.997 [2024-12-05 19:48:12.032367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.997 [2024-12-05 19:48:12.032374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.997 [2024-12-05 19:48:12.032413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.997 [2024-12-05 19:48:12.032422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:44.997 [2024-12-05 19:48:12.032432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.997 [2024-12-05 19:48:12.032440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.997 [2024-12-05 19:48:12.032546] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.060 ms, result 0 00:29:45.570 00:29:45.570 00:29:45.570 19:48:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:48.116 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:48.116 19:48:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:48.116 19:48:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:48.116 19:48:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:48.116 19:48:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:48.116 19:48:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:48.116 19:48:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:48.116 19:48:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:48.116 19:48:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 79366 00:29:48.116 Process with pid 79366 is not found 
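
The "md5sum -c" above is the core of the dirty-shutdown verification: data written before the unclean stop must read back bit-identical after FTL restores itself from its superblock, NV cache and P2L checkpoints. A minimal sketch of the pattern, with the checksum-creation half assumed (only the verification half is visible in this part of the log; $testdir is the ftl test directory set in common.sh):

    # Assumed setup step: record a checksum of the test data before the
    # dirty shutdown (not shown in this excerpt of the log).
    md5sum "$testdir/testfile2" > "$testdir/testfile2.md5"
    # ... kill the target without a clean FTL shutdown, restart, restore ...
    # Verification step, as traced above: re-read the file and compare it
    # against the stored sum; prints "testfile2: OK" on success.
    md5sum -c "$testdir/testfile2.md5"
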
00:29:48.116 19:48:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 79366 ']' 00:29:48.116 19:48:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 79366 00:29:48.116 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79366) - No such process 00:29:48.116 19:48:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 79366 is not found' 00:29:48.116 19:48:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:48.377 19:48:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:48.377 Remove shared memory files 00:29:48.377 19:48:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:48.377 19:48:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:48.377 19:48:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:48.377 19:48:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:48.377 19:48:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:48.377 19:48:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:48.377 ************************************ 00:29:48.377 END TEST ftl_dirty_shutdown 00:29:48.377 ************************************ 00:29:48.377 00:29:48.377 real 3m57.382s 00:29:48.377 user 4m14.760s 00:29:48.377 sys 0m25.172s 00:29:48.377 19:48:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:48.377 19:48:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:48.377 19:48:15 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:48.377 19:48:15 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:48.377 19:48:15 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:48.377 19:48:15 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:48.377 ************************************ 00:29:48.377 START TEST ftl_upgrade_shutdown 00:29:48.377 ************************************ 00:29:48.377 19:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:48.377 * Looking for test storage... 
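
The killprocess trace above reports "No such process" because pid 79366 had already gone away; "kill -0" delivers no signal at all, it only probes whether the pid can still be signaled. A simplified reconstruction of that liveness check (hypothetical helper shape; the real killprocess in autotest_common.sh does more than this sketch):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # no pid recorded, nothing to do
        if kill -0 "$pid" 2>/dev/null; then  # probe only; sends no signal
            kill "$pid" && wait "$pid"       # still alive: terminate and reap
        else
            echo "Process with pid $pid is not found"
        fi
    }
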
00:29:48.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:48.377 19:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:48.377 19:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:48.377 19:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:48.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.638 --rc genhtml_branch_coverage=1 00:29:48.638 --rc genhtml_function_coverage=1 00:29:48.638 --rc genhtml_legend=1 00:29:48.638 --rc geninfo_all_blocks=1 00:29:48.638 --rc geninfo_unexecuted_blocks=1 00:29:48.638 00:29:48.638 ' 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:48.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.638 --rc genhtml_branch_coverage=1 00:29:48.638 --rc genhtml_function_coverage=1 00:29:48.638 --rc genhtml_legend=1 00:29:48.638 --rc geninfo_all_blocks=1 00:29:48.638 --rc geninfo_unexecuted_blocks=1 00:29:48.638 00:29:48.638 ' 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:48.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.638 --rc genhtml_branch_coverage=1 00:29:48.638 --rc genhtml_function_coverage=1 00:29:48.638 --rc genhtml_legend=1 00:29:48.638 --rc geninfo_all_blocks=1 00:29:48.638 --rc geninfo_unexecuted_blocks=1 00:29:48.638 00:29:48.638 ' 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:48.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.638 --rc genhtml_branch_coverage=1 00:29:48.638 --rc genhtml_function_coverage=1 00:29:48.638 --rc genhtml_legend=1 00:29:48.638 --rc geninfo_all_blocks=1 00:29:48.638 --rc geninfo_unexecuted_blocks=1 00:29:48.638 00:29:48.638 ' 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:48.638 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:48.639 19:48:15 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81924 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81924 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81924 ']' 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.639 19:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:48.639 [2024-12-05 19:48:15.788979] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
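
At this point the upgrade test launches a fresh spdk_tgt pinned to core 0 and blocks in waitforlisten until the target's RPC server answers on /var/tmp/spdk.sock. A simplified sketch of that launch-and-poll pattern, assuming the real waitforlisten's retry bookkeeping is stripped away (it also verifies the pid stays alive while waiting):

    # Launch the SPDK target on core 0 and capture its pid.
    "$spdk_tgt_bin" '--cpumask=[0]' &
    spdk_tgt_pid=$!
    # Poll the RPC socket until the server responds. rpc_get_methods is a
    # cheap RPC that only succeeds once app initialization has finished.
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done
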
00:29:48.639 [2024-12-05 19:48:15.789336] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81924 ] 00:29:48.899 [2024-12-05 19:48:15.949538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.899 [2024-12-05 19:48:16.086518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:49.846 19:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:50.106 19:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:50.106 19:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:50.106 19:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:50.106 19:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:50.106 19:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:50.106 19:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:50.106 19:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:50.106 19:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:50.106 19:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:50.106 { 00:29:50.106 "name": "basen1", 00:29:50.106 "aliases": [ 00:29:50.106 "9d1318af-52e7-4ccb-a761-c3179f7cf415" 00:29:50.106 ], 00:29:50.106 "product_name": "NVMe disk", 00:29:50.106 "block_size": 4096, 00:29:50.106 "num_blocks": 1310720, 00:29:50.106 "uuid": "9d1318af-52e7-4ccb-a761-c3179f7cf415", 00:29:50.106 "numa_id": -1, 00:29:50.106 "assigned_rate_limits": { 00:29:50.106 "rw_ios_per_sec": 0, 00:29:50.106 "rw_mbytes_per_sec": 0, 00:29:50.106 "r_mbytes_per_sec": 0, 00:29:50.106 "w_mbytes_per_sec": 0 00:29:50.106 }, 00:29:50.106 "claimed": true, 00:29:50.106 "claim_type": "read_many_write_one", 00:29:50.106 "zoned": false, 00:29:50.106 "supported_io_types": { 00:29:50.106 "read": true, 00:29:50.106 "write": true, 00:29:50.106 "unmap": true, 00:29:50.106 "flush": true, 00:29:50.106 "reset": true, 00:29:50.106 "nvme_admin": true, 00:29:50.106 "nvme_io": true, 00:29:50.106 "nvme_io_md": false, 00:29:50.106 "write_zeroes": true, 00:29:50.106 "zcopy": false, 00:29:50.106 "get_zone_info": false, 00:29:50.106 "zone_management": false, 00:29:50.106 "zone_append": false, 00:29:50.106 "compare": true, 00:29:50.106 "compare_and_write": false, 00:29:50.106 "abort": true, 00:29:50.106 "seek_hole": false, 00:29:50.106 "seek_data": false, 00:29:50.106 "copy": true, 00:29:50.106 "nvme_iov_md": false 00:29:50.106 }, 00:29:50.106 "driver_specific": { 00:29:50.106 "nvme": [ 00:29:50.106 { 00:29:50.106 "pci_address": "0000:00:11.0", 00:29:50.106 "trid": { 00:29:50.106 "trtype": "PCIe", 00:29:50.106 "traddr": "0000:00:11.0" 00:29:50.106 }, 00:29:50.106 "ctrlr_data": { 00:29:50.106 "cntlid": 0, 00:29:50.106 "vendor_id": "0x1b36", 00:29:50.106 "model_number": "QEMU NVMe Ctrl", 00:29:50.106 "serial_number": "12341", 00:29:50.106 "firmware_revision": "8.0.0", 00:29:50.106 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:50.106 "oacs": { 00:29:50.106 "security": 0, 00:29:50.106 "format": 1, 00:29:50.106 "firmware": 0, 00:29:50.106 "ns_manage": 1 00:29:50.106 }, 00:29:50.106 "multi_ctrlr": false, 00:29:50.106 "ana_reporting": false 00:29:50.106 }, 00:29:50.106 "vs": { 00:29:50.106 "nvme_version": "1.4" 00:29:50.106 }, 00:29:50.106 "ns_data": { 00:29:50.106 "id": 1, 00:29:50.106 "can_share": false 00:29:50.106 } 00:29:50.106 } 00:29:50.106 ], 00:29:50.106 "mp_policy": "active_passive" 00:29:50.106 } 00:29:50.106 } 00:29:50.106 ]' 00:29:50.106 19:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:50.366 19:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:50.366 19:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:50.366 19:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:50.366 19:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:50.366 19:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:50.366 19:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:50.366 19:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:50.366 19:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:50.366 19:48:17 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:50.366 19:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:50.627 19:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=8c5ee5d8-6b07-4c7d-8d36-b819cbb8847c 00:29:50.627 19:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:50.627 19:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8c5ee5d8-6b07-4c7d-8d36-b819cbb8847c 00:29:50.887 19:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:50.887 19:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=c6dd6603-4430-494a-8aec-16330aa992fc 00:29:50.887 19:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u c6dd6603-4430-494a-8aec-16330aa992fc 00:29:51.147 19:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=34e2b996-ba7e-4865-a200-53668ecabb1c 00:29:51.147 19:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 34e2b996-ba7e-4865-a200-53668ecabb1c ]] 00:29:51.147 19:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 34e2b996-ba7e-4865-a200-53668ecabb1c 5120 00:29:51.147 19:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:51.147 19:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:51.147 19:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=34e2b996-ba7e-4865-a200-53668ecabb1c 00:29:51.147 19:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:51.147 19:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 34e2b996-ba7e-4865-a200-53668ecabb1c 00:29:51.147 19:48:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=34e2b996-ba7e-4865-a200-53668ecabb1c 00:29:51.147 19:48:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:51.147 19:48:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:51.147 19:48:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:51.147 19:48:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 34e2b996-ba7e-4865-a200-53668ecabb1c 00:29:51.407 19:48:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:51.407 { 00:29:51.407 "name": "34e2b996-ba7e-4865-a200-53668ecabb1c", 00:29:51.407 "aliases": [ 00:29:51.407 "lvs/basen1p0" 00:29:51.407 ], 00:29:51.407 "product_name": "Logical Volume", 00:29:51.407 "block_size": 4096, 00:29:51.407 "num_blocks": 5242880, 00:29:51.407 "uuid": "34e2b996-ba7e-4865-a200-53668ecabb1c", 00:29:51.407 "assigned_rate_limits": { 00:29:51.407 "rw_ios_per_sec": 0, 00:29:51.407 "rw_mbytes_per_sec": 0, 00:29:51.407 "r_mbytes_per_sec": 0, 00:29:51.407 "w_mbytes_per_sec": 0 00:29:51.407 }, 00:29:51.407 "claimed": false, 00:29:51.407 "zoned": false, 00:29:51.407 "supported_io_types": { 00:29:51.407 "read": true, 00:29:51.407 "write": true, 00:29:51.407 "unmap": true, 00:29:51.407 "flush": false, 00:29:51.407 "reset": true, 00:29:51.407 "nvme_admin": false, 00:29:51.407 "nvme_io": false, 00:29:51.407 "nvme_io_md": false, 00:29:51.407 "write_zeroes": 
true, 00:29:51.407 "zcopy": false, 00:29:51.407 "get_zone_info": false, 00:29:51.407 "zone_management": false, 00:29:51.407 "zone_append": false, 00:29:51.407 "compare": false, 00:29:51.407 "compare_and_write": false, 00:29:51.407 "abort": false, 00:29:51.407 "seek_hole": true, 00:29:51.407 "seek_data": true, 00:29:51.407 "copy": false, 00:29:51.407 "nvme_iov_md": false 00:29:51.407 }, 00:29:51.407 "driver_specific": { 00:29:51.407 "lvol": { 00:29:51.407 "lvol_store_uuid": "c6dd6603-4430-494a-8aec-16330aa992fc", 00:29:51.407 "base_bdev": "basen1", 00:29:51.407 "thin_provision": true, 00:29:51.407 "num_allocated_clusters": 0, 00:29:51.407 "snapshot": false, 00:29:51.407 "clone": false, 00:29:51.407 "esnap_clone": false 00:29:51.407 } 00:29:51.407 } 00:29:51.407 } 00:29:51.407 ]' 00:29:51.407 19:48:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:51.407 19:48:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:51.407 19:48:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:51.407 19:48:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:51.407 19:48:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:51.407 19:48:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:51.407 19:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:51.407 19:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:51.407 19:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:51.667 19:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:51.667 19:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:51.667 19:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:51.928 19:48:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:51.928 19:48:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:51.929 19:48:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 34e2b996-ba7e-4865-a200-53668ecabb1c -c cachen1p0 --l2p_dram_limit 2 00:29:52.191 [2024-12-05 19:48:19.301408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.191 [2024-12-05 19:48:19.301480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:52.191 [2024-12-05 19:48:19.301499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:52.191 [2024-12-05 19:48:19.301508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.191 [2024-12-05 19:48:19.301583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.191 [2024-12-05 19:48:19.301594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:52.191 [2024-12-05 19:48:19.301605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:52.191 [2024-12-05 19:48:19.301614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.191 [2024-12-05 19:48:19.301638] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:52.191 [2024-12-05 
19:48:19.302522] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:52.191 [2024-12-05 19:48:19.302557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.191 [2024-12-05 19:48:19.302566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:52.191 [2024-12-05 19:48:19.302577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.920 ms 00:29:52.191 [2024-12-05 19:48:19.302585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.191 [2024-12-05 19:48:19.302667] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 00439dda-d519-4ec7-88a6-b1dfade02cf3 00:29:52.191 [2024-12-05 19:48:19.304401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.191 [2024-12-05 19:48:19.304451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:52.191 [2024-12-05 19:48:19.304463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:29:52.191 [2024-12-05 19:48:19.304473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.191 [2024-12-05 19:48:19.313413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.191 [2024-12-05 19:48:19.313466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:52.191 [2024-12-05 19:48:19.313478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.885 ms 00:29:52.191 [2024-12-05 19:48:19.313488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.191 [2024-12-05 19:48:19.313536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.191 [2024-12-05 19:48:19.313548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:52.191 [2024-12-05 19:48:19.313556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:29:52.191 [2024-12-05 19:48:19.313569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.191 [2024-12-05 19:48:19.313627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.191 [2024-12-05 19:48:19.313641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:52.191 [2024-12-05 19:48:19.313651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:52.191 [2024-12-05 19:48:19.313662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.191 [2024-12-05 19:48:19.313718] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:52.191 [2024-12-05 19:48:19.318145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.191 [2024-12-05 19:48:19.318185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:52.191 [2024-12-05 19:48:19.318201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.433 ms 00:29:52.191 [2024-12-05 19:48:19.318208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.191 [2024-12-05 19:48:19.318246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.191 [2024-12-05 19:48:19.318255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:52.191 [2024-12-05 19:48:19.318266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:52.191 [2024-12-05 19:48:19.318274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:52.191 [2024-12-05 19:48:19.318320] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:52.191 [2024-12-05 19:48:19.318474] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:52.191 [2024-12-05 19:48:19.318492] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:52.191 [2024-12-05 19:48:19.318505] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:52.191 [2024-12-05 19:48:19.318518] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:52.191 [2024-12-05 19:48:19.318527] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:52.191 [2024-12-05 19:48:19.318538] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:52.191 [2024-12-05 19:48:19.318546] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:52.191 [2024-12-05 19:48:19.318559] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:52.191 [2024-12-05 19:48:19.318566] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:52.191 [2024-12-05 19:48:19.318577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.191 [2024-12-05 19:48:19.318584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:52.191 [2024-12-05 19:48:19.318596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.259 ms 00:29:52.191 [2024-12-05 19:48:19.318603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.191 [2024-12-05 19:48:19.318712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.191 [2024-12-05 19:48:19.318730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:52.191 [2024-12-05 19:48:19.318740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.092 ms 00:29:52.191 [2024-12-05 19:48:19.318748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.191 [2024-12-05 19:48:19.318856] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:52.191 [2024-12-05 19:48:19.318867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:52.191 [2024-12-05 19:48:19.318878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:52.191 [2024-12-05 19:48:19.318886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:52.191 [2024-12-05 19:48:19.318896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:52.191 [2024-12-05 19:48:19.318902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:52.191 [2024-12-05 19:48:19.318911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:52.191 [2024-12-05 19:48:19.318917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:52.191 [2024-12-05 19:48:19.318926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:52.191 [2024-12-05 19:48:19.318933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:52.191 [2024-12-05 19:48:19.318943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:52.191 [2024-12-05 19:48:19.318950] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:52.191 [2024-12-05 19:48:19.318959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:52.191 [2024-12-05 19:48:19.318966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:52.191 [2024-12-05 19:48:19.318975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:52.191 [2024-12-05 19:48:19.318981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:52.191 [2024-12-05 19:48:19.318993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:52.191 [2024-12-05 19:48:19.318999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:52.191 [2024-12-05 19:48:19.319008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:52.191 [2024-12-05 19:48:19.319015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:52.191 [2024-12-05 19:48:19.319026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:52.191 [2024-12-05 19:48:19.319034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:52.191 [2024-12-05 19:48:19.319043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:52.191 [2024-12-05 19:48:19.319050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:52.191 [2024-12-05 19:48:19.319058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:52.191 [2024-12-05 19:48:19.319065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:52.191 [2024-12-05 19:48:19.319074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:52.191 [2024-12-05 19:48:19.319080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:52.191 [2024-12-05 19:48:19.319088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:52.191 [2024-12-05 19:48:19.319095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:52.191 [2024-12-05 19:48:19.319103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:52.192 [2024-12-05 19:48:19.319109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:52.192 [2024-12-05 19:48:19.319121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:52.192 [2024-12-05 19:48:19.319127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:52.192 [2024-12-05 19:48:19.319136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:52.192 [2024-12-05 19:48:19.319143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:52.192 [2024-12-05 19:48:19.319152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:52.192 [2024-12-05 19:48:19.319159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:52.192 [2024-12-05 19:48:19.319168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:52.192 [2024-12-05 19:48:19.319174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:52.192 [2024-12-05 19:48:19.319183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:52.192 [2024-12-05 19:48:19.319189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:52.192 [2024-12-05 19:48:19.319197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:52.192 [2024-12-05 19:48:19.319203] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:52.192 [2024-12-05 19:48:19.319213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:52.192 [2024-12-05 19:48:19.319220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:52.192 [2024-12-05 19:48:19.319229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:52.192 [2024-12-05 19:48:19.319237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:52.192 [2024-12-05 19:48:19.319248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:52.192 [2024-12-05 19:48:19.319254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:52.192 [2024-12-05 19:48:19.319263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:52.192 [2024-12-05 19:48:19.319269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:52.192 [2024-12-05 19:48:19.319279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:52.192 [2024-12-05 19:48:19.319289] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:52.192 [2024-12-05 19:48:19.319303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:52.192 [2024-12-05 19:48:19.319312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:52.192 [2024-12-05 19:48:19.319321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:52.192 [2024-12-05 19:48:19.319328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:52.192 [2024-12-05 19:48:19.319338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:52.192 [2024-12-05 19:48:19.319345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:52.192 [2024-12-05 19:48:19.319354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:52.192 [2024-12-05 19:48:19.319361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:52.192 [2024-12-05 19:48:19.319373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:52.192 [2024-12-05 19:48:19.319380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:52.192 [2024-12-05 19:48:19.319392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:52.192 [2024-12-05 19:48:19.319399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:52.192 [2024-12-05 19:48:19.319408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:52.192 [2024-12-05 19:48:19.319415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:52.192 [2024-12-05 19:48:19.319424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:52.192 [2024-12-05 19:48:19.319431] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:52.192 [2024-12-05 19:48:19.319441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:52.192 [2024-12-05 19:48:19.319450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:52.192 [2024-12-05 19:48:19.319459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:52.192 [2024-12-05 19:48:19.319466] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:52.192 [2024-12-05 19:48:19.319476] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:52.192 [2024-12-05 19:48:19.319483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.192 [2024-12-05 19:48:19.319493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:52.192 [2024-12-05 19:48:19.319501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.698 ms 00:29:52.192 [2024-12-05 19:48:19.319511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.192 [2024-12-05 19:48:19.319549] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
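For orientation, the bring-up traced above boils down to a short rpc.py sequence. get_bdev_size derives capacity as block_size × num_blocks (4096 B × 1310720 blocks = 5120 MiB for basen1; 4096 B × 5242880 blocks = 20480 MiB for the lvol), and clear_lvols removes a stale lvstore before the new one is created. The sketch below reflects this run only: the PCIe addresses and sizes are the ones used here, <lvs-uuid> and <lvol-uuid> stand for the UUIDs printed in the log, and paths are repo-relative.

  # base device (0000:00:11.0): carve a 20 GiB thin-provisioned lvol for FTL data
  scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
  scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs
  scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u <lvs-uuid>
  # cache device (0000:00:10.0): split off a 5 GiB slice for the NV cache
  scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
  scripts/rpc.py bdev_split_create cachen1 -s 5120 1
  # bind both into the FTL bdev; first startup then scrubs the NV cache region
  scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2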
00:29:52.192 [2024-12-05 19:48:19.319563] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:58.783 [2024-12-05 19:48:25.075263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.783 [2024-12-05 19:48:25.075572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:58.783 [2024-12-05 19:48:25.075663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5755.691 ms 00:29:58.783 [2024-12-05 19:48:25.075709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.783 [2024-12-05 19:48:25.114460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.783 [2024-12-05 19:48:25.114782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:58.783 [2024-12-05 19:48:25.115043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.131 ms 00:29:58.783 [2024-12-05 19:48:25.115096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.783 [2024-12-05 19:48:25.115333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.783 [2024-12-05 19:48:25.115369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:58.783 [2024-12-05 19:48:25.115441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:29:58.783 [2024-12-05 19:48:25.115479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.783 [2024-12-05 19:48:25.151259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.783 [2024-12-05 19:48:25.151448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:58.783 [2024-12-05 19:48:25.151518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.697 ms 00:29:58.783 [2024-12-05 19:48:25.151548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.783 [2024-12-05 19:48:25.151607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.783 [2024-12-05 19:48:25.151641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:58.783 [2024-12-05 19:48:25.151665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:58.783 [2024-12-05 19:48:25.151710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.783 [2024-12-05 19:48:25.152294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.783 [2024-12-05 19:48:25.152439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:58.783 [2024-12-05 19:48:25.152635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.508 ms 00:29:58.783 [2024-12-05 19:48:25.152728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.783 [2024-12-05 19:48:25.152795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.783 [2024-12-05 19:48:25.152821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:58.783 [2024-12-05 19:48:25.152914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:29:58.783 [2024-12-05 19:48:25.152943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.783 [2024-12-05 19:48:25.170938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.783 [2024-12-05 19:48:25.171127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:58.783 [2024-12-05 19:48:25.171192] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.960 ms 00:29:58.783 [2024-12-05 19:48:25.171219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.783 [2024-12-05 19:48:25.204511] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:58.783 [2024-12-05 19:48:25.206050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.783 [2024-12-05 19:48:25.206203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:58.783 [2024-12-05 19:48:25.206274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.712 ms 00:29:58.783 [2024-12-05 19:48:25.206441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.783 [2024-12-05 19:48:25.234877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.783 [2024-12-05 19:48:25.234929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:58.783 [2024-12-05 19:48:25.234950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.345 ms 00:29:58.783 [2024-12-05 19:48:25.234961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.783 [2024-12-05 19:48:25.235075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.783 [2024-12-05 19:48:25.235090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:58.783 [2024-12-05 19:48:25.235105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:29:58.783 [2024-12-05 19:48:25.235113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.783 [2024-12-05 19:48:25.260473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.783 [2024-12-05 19:48:25.260683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:58.783 [2024-12-05 19:48:25.260712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.298 ms 00:29:58.783 [2024-12-05 19:48:25.260722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.783 [2024-12-05 19:48:25.284780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.783 [2024-12-05 19:48:25.284832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:58.783 [2024-12-05 19:48:25.284849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.002 ms 00:29:58.783 [2024-12-05 19:48:25.284857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.783 [2024-12-05 19:48:25.285466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.783 [2024-12-05 19:48:25.285479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:58.783 [2024-12-05 19:48:25.285492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.559 ms 00:29:58.783 [2024-12-05 19:48:25.285503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.783 [2024-12-05 19:48:25.367324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.783 [2024-12-05 19:48:25.367527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:58.783 [2024-12-05 19:48:25.367561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 81.769 ms 00:29:58.783 [2024-12-05 19:48:25.367571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.783 [2024-12-05 19:48:25.394764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
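The layout dump and the l2p cache notice above are consistent with each other: 3,774,873 L2P entries at 4 bytes each need about 14.4 MiB of metadata, matching the 14.50 MiB l2p region (presumably rounded up to whole metadata blocks), while --l2p_dram_limit 2 caps how much of it stays resident in DRAM, hence ftl_l2p_cache reporting "1 (of 2) MiB". A quick check of the arithmetic:

  # L2P sizing reported above, double-checked (entry count and width from this run)
  awk 'BEGIN { printf "%.2f MiB\n", 3774873 * 4 / 1048576 }'   # 14.40 MiB, cf. the 14.50 MiB l2p region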
00:29:58.783 [2024-12-05 19:48:25.394817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:58.783 [2024-12-05 19:48:25.394837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.107 ms 00:29:58.783 [2024-12-05 19:48:25.394846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.783 [2024-12-05 19:48:25.420661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.783 [2024-12-05 19:48:25.420719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:58.783 [2024-12-05 19:48:25.420736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.777 ms 00:29:58.783 [2024-12-05 19:48:25.420744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.784 [2024-12-05 19:48:25.446915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.784 [2024-12-05 19:48:25.446966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:58.784 [2024-12-05 19:48:25.446983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.137 ms 00:29:58.784 [2024-12-05 19:48:25.446991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.784 [2024-12-05 19:48:25.447027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.784 [2024-12-05 19:48:25.447037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:58.784 [2024-12-05 19:48:25.447052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:58.784 [2024-12-05 19:48:25.447060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.784 [2024-12-05 19:48:25.447156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.784 [2024-12-05 19:48:25.447170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:58.784 [2024-12-05 19:48:25.447181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:29:58.784 [2024-12-05 19:48:25.447189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.784 [2024-12-05 19:48:25.448390] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 6146.483 ms, result 0 00:29:58.784 { 00:29:58.784 "name": "ftl", 00:29:58.784 "uuid": "00439dda-d519-4ec7-88a6-b1dfade02cf3" 00:29:58.784 } 00:29:58.784 19:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:58.784 [2024-12-05 19:48:25.663742] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.784 19:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:58.784 19:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:59.044 [2024-12-05 19:48:26.099992] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:59.045 19:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:59.306 [2024-12-05 19:48:26.325485] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:59.306 19:48:26 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:59.569 Fill FTL, iteration 1 00:29:59.569 19:48:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:59.569 19:48:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:59.569 19:48:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:59.569 19:48:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:59.569 19:48:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:59.569 19:48:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:59.569 19:48:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:59.569 19:48:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:59.569 19:48:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:59.569 19:48:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:59.569 19:48:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:59.569 19:48:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:59.569 19:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:59.569 19:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:59.569 19:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:59.569 19:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:59.569 19:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=82070 00:29:59.569 19:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:59.569 19:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 82070 /var/tmp/spdk.tgt.sock 00:29:59.569 19:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82070 ']' 00:29:59.569 19:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:59.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:59.570 19:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:59.570 19:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:59.570 19:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:59.570 19:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:59.570 19:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:59.570 [2024-12-05 19:48:26.786588] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
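Worth noting how the fill is wired up: the main target (core 0) first exports the FTL bdev over NVMe/TCP, and a second spdk_tgt is then started as an initiator pinned to core 1 ('--cpumask=[1]', RPC socket /var/tmp/spdk.tgt.sock), so dd traffic reaches ftl the same way an external host would. The export side, condensed from the RPCs issued above:

  # target side: export the FTL bdev as namespace 1 of cnode0 on 127.0.0.1:4420
  scripts/rpc.py nvmf_create_transport --trtype TCP
  scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1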
00:29:59.570 [2024-12-05 19:48:26.787615] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82070 ] 00:29:59.840 [2024-12-05 19:48:26.960469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.102 [2024-12-05 19:48:27.095321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.672 19:48:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:00.672 19:48:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:00.672 19:48:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:30:00.932 ftln1 00:30:00.932 19:48:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:30:00.932 19:48:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:30:01.192 19:48:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:30:01.192 19:48:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 82070 00:30:01.192 19:48:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82070 ']' 00:30:01.192 19:48:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82070 00:30:01.192 19:48:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:01.192 19:48:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:01.192 19:48:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82070 00:30:01.192 killing process with pid 82070 00:30:01.192 19:48:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:01.192 19:48:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:01.192 19:48:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82070' 00:30:01.192 19:48:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82070 00:30:01.192 19:48:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82070 00:30:03.109 19:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:30:03.109 19:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:03.109 [2024-12-05 19:48:29.997429] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
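Inside the initiator, tcp_dd then amounts to two steps, shown here as a condensed view of iteration 1 with the flags used in this run (paths repo-relative):

  # attach the exported namespace; it shows up locally as ftln1
  scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller \
      -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
  # write 1024 x 1 MiB of random data at queue depth 2, starting at offset 0
  build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0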
00:30:03.109 [2024-12-05 19:48:29.997841] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82116 ] 00:30:03.109 [2024-12-05 19:48:30.163767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.109 [2024-12-05 19:48:30.295729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.515  [2024-12-05T19:48:32.742Z] Copying: 173/1024 [MB] (173 MBps) [2024-12-05T19:48:34.131Z] Copying: 352/1024 [MB] (179 MBps) [2024-12-05T19:48:34.704Z] Copying: 524/1024 [MB] (172 MBps) [2024-12-05T19:48:36.110Z] Copying: 705/1024 [MB] (181 MBps) [2024-12-05T19:48:37.063Z] Copying: 864/1024 [MB] (159 MBps) [2024-12-05T19:48:37.063Z] Copying: 1009/1024 [MB] (145 MBps) [2024-12-05T19:48:37.634Z] Copying: 1024/1024 [MB] (average 167 MBps) 00:30:10.379 00:30:10.379 Calculate MD5 checksum, iteration 1 00:30:10.379 19:48:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:10.379 19:48:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:10.379 19:48:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:10.379 19:48:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:10.379 19:48:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:10.379 19:48:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:10.379 19:48:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:10.379 19:48:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:10.637 [2024-12-05 19:48:37.686039] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:30:10.637 [2024-12-05 19:48:37.686446] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82196 ] 00:30:10.637 [2024-12-05 19:48:37.846976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.895 [2024-12-05 19:48:37.941282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.265  [2024-12-05T19:48:39.779Z] Copying: 680/1024 [MB] (680 MBps) [2024-12-05T19:48:40.711Z] Copying: 1024/1024 [MB] (average 684 MBps) 00:30:13.456 00:30:13.456 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:13.457 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:15.355 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:15.355 Fill FTL, iteration 2 00:30:15.355 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=7b7c500f4a7c07751e89f45c4a373719 00:30:15.355 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:15.355 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:15.355 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:15.355 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:15.355 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:15.355 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:15.355 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:15.355 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:15.355 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:15.355 [2024-12-05 19:48:42.593964] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:30:15.355 [2024-12-05 19:48:42.594078] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82253 ] 00:30:15.620 [2024-12-05 19:48:42.752412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.620 [2024-12-05 19:48:42.850309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.992  [2024-12-05T19:48:45.233Z] Copying: 211/1024 [MB] (211 MBps) [2024-12-05T19:48:46.608Z] Copying: 351/1024 [MB] (140 MBps) [2024-12-05T19:48:47.543Z] Copying: 604/1024 [MB] (253 MBps) [2024-12-05T19:48:48.107Z] Copying: 870/1024 [MB] (266 MBps) [2024-12-05T19:48:48.673Z] Copying: 1024/1024 [MB] (average 221 MBps) 00:30:21.418 00:30:21.418 Calculate MD5 checksum, iteration 2 00:30:21.418 19:48:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:21.418 19:48:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:21.418 19:48:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:21.418 19:48:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:21.418 19:48:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:21.418 19:48:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:21.418 19:48:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:21.418 19:48:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:21.418 [2024-12-05 19:48:48.510075] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
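Between them, the two digests (7b7c500f... for the first GiB and the iteration-2 value recorded just below) fingerprint everything written before the shutdown; presumably the upgrade half of the test re-reads the same ranges afterwards and compares, along these lines:

  # hypothetical post-restart check; the actual comparison lives later in upgrade_shutdown.sh
  [[ "$(md5sum test/ftl/file | cut -d' ' -f1)" == "${sums[$i]}" ]] \
      || echo "checksum mismatch at iteration $i"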
00:30:21.418 [2024-12-05 19:48:48.510187] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82317 ] 00:30:21.418 [2024-12-05 19:48:48.664970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.676 [2024-12-05 19:48:48.745407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:23.051  [2024-12-05T19:48:50.871Z] Copying: 670/1024 [MB] (670 MBps) [2024-12-05T19:48:51.807Z] Copying: 1024/1024 [MB] (average 672 MBps) 00:30:24.552 00:30:24.552 19:48:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:24.552 19:48:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:27.080 19:48:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:27.080 19:48:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=66ebc005a81859352e18c5c481c03108 00:30:27.080 19:48:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:27.080 19:48:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:27.080 19:48:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:27.080 [2024-12-05 19:48:54.058396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.080 [2024-12-05 19:48:54.058439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:27.080 [2024-12-05 19:48:54.058451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:27.080 [2024-12-05 19:48:54.058457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.080 [2024-12-05 19:48:54.058476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.080 [2024-12-05 19:48:54.058486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:27.080 [2024-12-05 19:48:54.058493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:27.080 [2024-12-05 19:48:54.058499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.080 [2024-12-05 19:48:54.058515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.080 [2024-12-05 19:48:54.058521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:27.080 [2024-12-05 19:48:54.058527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:27.080 [2024-12-05 19:48:54.058534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.080 [2024-12-05 19:48:54.058584] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.177 ms, result 0 00:30:27.080 true 00:30:27.080 19:48:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:27.080 { 00:30:27.080 "name": "ftl", 00:30:27.080 "properties": [ 00:30:27.080 { 00:30:27.080 "name": "superblock_version", 00:30:27.080 "value": 5, 00:30:27.080 "read-only": true 00:30:27.080 }, 00:30:27.080 { 00:30:27.080 "name": "base_device", 00:30:27.080 "bands": [ 00:30:27.080 { 00:30:27.080 "id": 0, 00:30:27.080 "state": "FREE", 00:30:27.080 "validity": 0.0 
00:30:27.080 }, 00:30:27.080 { 00:30:27.081 "id": 1, 00:30:27.081 "state": "FREE", 00:30:27.081 "validity": 0.0 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "id": 2, 00:30:27.081 "state": "FREE", 00:30:27.081 "validity": 0.0 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "id": 3, 00:30:27.081 "state": "FREE", 00:30:27.081 "validity": 0.0 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "id": 4, 00:30:27.081 "state": "FREE", 00:30:27.081 "validity": 0.0 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "id": 5, 00:30:27.081 "state": "FREE", 00:30:27.081 "validity": 0.0 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "id": 6, 00:30:27.081 "state": "FREE", 00:30:27.081 "validity": 0.0 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "id": 7, 00:30:27.081 "state": "FREE", 00:30:27.081 "validity": 0.0 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "id": 8, 00:30:27.081 "state": "FREE", 00:30:27.081 "validity": 0.0 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "id": 9, 00:30:27.081 "state": "FREE", 00:30:27.081 "validity": 0.0 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "id": 10, 00:30:27.081 "state": "FREE", 00:30:27.081 "validity": 0.0 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "id": 11, 00:30:27.081 "state": "FREE", 00:30:27.081 "validity": 0.0 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "id": 12, 00:30:27.081 "state": "FREE", 00:30:27.081 "validity": 0.0 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "id": 13, 00:30:27.081 "state": "FREE", 00:30:27.081 "validity": 0.0 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "id": 14, 00:30:27.081 "state": "FREE", 00:30:27.081 "validity": 0.0 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "id": 15, 00:30:27.081 "state": "FREE", 00:30:27.081 "validity": 0.0 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "id": 16, 00:30:27.081 "state": "FREE", 00:30:27.081 "validity": 0.0 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "id": 17, 00:30:27.081 "state": "FREE", 00:30:27.081 "validity": 0.0 00:30:27.081 } 00:30:27.081 ], 00:30:27.081 "read-only": true 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "name": "cache_device", 00:30:27.081 "type": "bdev", 00:30:27.081 "chunks": [ 00:30:27.081 { 00:30:27.081 "id": 0, 00:30:27.081 "state": "INACTIVE", 00:30:27.081 "utilization": 0.0 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "id": 1, 00:30:27.081 "state": "CLOSED", 00:30:27.081 "utilization": 1.0 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "id": 2, 00:30:27.081 "state": "CLOSED", 00:30:27.081 "utilization": 1.0 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "id": 3, 00:30:27.081 "state": "OPEN", 00:30:27.081 "utilization": 0.001953125 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "id": 4, 00:30:27.081 "state": "OPEN", 00:30:27.081 "utilization": 0.0 00:30:27.081 } 00:30:27.081 ], 00:30:27.081 "read-only": true 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "name": "verbose_mode", 00:30:27.081 "value": true, 00:30:27.081 "unit": "", 00:30:27.081 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:27.081 }, 00:30:27.081 { 00:30:27.081 "name": "prep_upgrade_on_shutdown", 00:30:27.081 "value": false, 00:30:27.081 "unit": "", 00:30:27.081 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:27.081 } 00:30:27.081 ] 00:30:27.081 } 00:30:27.081 19:48:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:27.339 [2024-12-05 19:48:54.430668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
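The properties dump above also shows where the two fills ended up: cache chunks 1 and 2 are CLOSED at utilization 1.0 and chunk 3 has just been opened. After arming prep_upgrade_on_shutdown (the Set property trace this note sits inside), the test counts cache chunks holding data via the jq filter issued shortly below and checks the count against zero; in this run it comes to 3:

  # count cache chunks with data; used=3 here (2 closed + 1 freshly opened)
  scripts/rpc.py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device")
             | .chunks[] | select(.utilization != 0.0)] | length'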
00:30:27.339 [2024-12-05 19:48:54.430818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:27.339 [2024-12-05 19:48:54.430832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:27.339 [2024-12-05 19:48:54.430839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.339 [2024-12-05 19:48:54.430862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.339 [2024-12-05 19:48:54.430869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:27.339 [2024-12-05 19:48:54.430875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:27.339 [2024-12-05 19:48:54.430881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.339 [2024-12-05 19:48:54.430895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.339 [2024-12-05 19:48:54.430901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:27.339 [2024-12-05 19:48:54.430907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:27.339 [2024-12-05 19:48:54.430912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.339 [2024-12-05 19:48:54.430960] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.279 ms, result 0 00:30:27.339 true 00:30:27.339 19:48:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:27.339 19:48:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:27.339 19:48:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:27.597 19:48:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:27.597 19:48:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:27.597 19:48:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:27.597 [2024-12-05 19:48:54.806997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.597 [2024-12-05 19:48:54.807038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:27.597 [2024-12-05 19:48:54.807049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:27.597 [2024-12-05 19:48:54.807055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.597 [2024-12-05 19:48:54.807072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.597 [2024-12-05 19:48:54.807079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:27.597 [2024-12-05 19:48:54.807085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:27.597 [2024-12-05 19:48:54.807090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.597 [2024-12-05 19:48:54.807105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.597 [2024-12-05 19:48:54.807111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:27.597 [2024-12-05 19:48:54.807116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:27.597 [2024-12-05 19:48:54.807122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:27.597 [2024-12-05 19:48:54.807165] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.160 ms, result 0 00:30:27.597 true 00:30:27.597 19:48:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:27.856 { 00:30:27.856 "name": "ftl", 00:30:27.856 "properties": [ 00:30:27.856 { 00:30:27.856 "name": "superblock_version", 00:30:27.856 "value": 5, 00:30:27.856 "read-only": true 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "name": "base_device", 00:30:27.856 "bands": [ 00:30:27.856 { 00:30:27.856 "id": 0, 00:30:27.856 "state": "FREE", 00:30:27.856 "validity": 0.0 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "id": 1, 00:30:27.856 "state": "FREE", 00:30:27.856 "validity": 0.0 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "id": 2, 00:30:27.856 "state": "FREE", 00:30:27.856 "validity": 0.0 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "id": 3, 00:30:27.856 "state": "FREE", 00:30:27.856 "validity": 0.0 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "id": 4, 00:30:27.856 "state": "FREE", 00:30:27.856 "validity": 0.0 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "id": 5, 00:30:27.856 "state": "FREE", 00:30:27.856 "validity": 0.0 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "id": 6, 00:30:27.856 "state": "FREE", 00:30:27.856 "validity": 0.0 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "id": 7, 00:30:27.856 "state": "FREE", 00:30:27.856 "validity": 0.0 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "id": 8, 00:30:27.856 "state": "FREE", 00:30:27.856 "validity": 0.0 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "id": 9, 00:30:27.856 "state": "FREE", 00:30:27.856 "validity": 0.0 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "id": 10, 00:30:27.856 "state": "FREE", 00:30:27.856 "validity": 0.0 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "id": 11, 00:30:27.856 "state": "FREE", 00:30:27.856 "validity": 0.0 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "id": 12, 00:30:27.856 "state": "FREE", 00:30:27.856 "validity": 0.0 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "id": 13, 00:30:27.856 "state": "FREE", 00:30:27.856 "validity": 0.0 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "id": 14, 00:30:27.856 "state": "FREE", 00:30:27.856 "validity": 0.0 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "id": 15, 00:30:27.856 "state": "FREE", 00:30:27.856 "validity": 0.0 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "id": 16, 00:30:27.856 "state": "FREE", 00:30:27.856 "validity": 0.0 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "id": 17, 00:30:27.856 "state": "FREE", 00:30:27.856 "validity": 0.0 00:30:27.856 } 00:30:27.856 ], 00:30:27.856 "read-only": true 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "name": "cache_device", 00:30:27.856 "type": "bdev", 00:30:27.856 "chunks": [ 00:30:27.856 { 00:30:27.856 "id": 0, 00:30:27.856 "state": "INACTIVE", 00:30:27.856 "utilization": 0.0 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "id": 1, 00:30:27.856 "state": "CLOSED", 00:30:27.856 "utilization": 1.0 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "id": 2, 00:30:27.856 "state": "CLOSED", 00:30:27.856 "utilization": 1.0 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "id": 3, 00:30:27.856 "state": "OPEN", 00:30:27.856 "utilization": 0.001953125 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "id": 4, 00:30:27.856 "state": "OPEN", 00:30:27.856 "utilization": 0.0 00:30:27.856 } 00:30:27.856 ], 00:30:27.856 "read-only": true 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "name": "verbose_mode", 
00:30:27.856 "value": true, 00:30:27.856 "unit": "", 00:30:27.856 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:27.856 }, 00:30:27.856 { 00:30:27.856 "name": "prep_upgrade_on_shutdown", 00:30:27.856 "value": true, 00:30:27.856 "unit": "", 00:30:27.856 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:27.856 } 00:30:27.856 ] 00:30:27.856 } 00:30:27.856 19:48:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:27.856 19:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81924 ]] 00:30:27.856 19:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81924 00:30:27.856 19:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81924 ']' 00:30:27.857 19:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 81924 00:30:27.857 19:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:27.857 19:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:27.857 19:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81924 00:30:27.857 killing process with pid 81924 00:30:27.857 19:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:27.857 19:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:27.857 19:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81924' 00:30:27.857 19:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 81924 00:30:27.857 19:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 81924 00:30:28.419 [2024-12-05 19:48:55.591198] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:28.419 [2024-12-05 19:48:55.602965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.419 [2024-12-05 19:48:55.603001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:28.419 [2024-12-05 19:48:55.603012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:28.420 [2024-12-05 19:48:55.603018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.420 [2024-12-05 19:48:55.603037] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:28.420 [2024-12-05 19:48:55.605243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.420 [2024-12-05 19:48:55.605268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:28.420 [2024-12-05 19:48:55.605277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.195 ms 00:30:28.420 [2024-12-05 19:48:55.605284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.522 [2024-12-05 19:49:03.412351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.522 [2024-12-05 19:49:03.412549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:36.522 [2024-12-05 19:49:03.412577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7807.015 ms 00:30:36.522 [2024-12-05 19:49:03.412591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.522 [2024-12-05 19:49:03.413830] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:36.522 [2024-12-05 19:49:03.413857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:36.522 [2024-12-05 19:49:03.413866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.220 ms 00:30:36.522 [2024-12-05 19:49:03.413874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.522 [2024-12-05 19:49:03.415008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.522 [2024-12-05 19:49:03.415029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:36.523 [2024-12-05 19:49:03.415038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.108 ms 00:30:36.523 [2024-12-05 19:49:03.415050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.523 [2024-12-05 19:49:03.424363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.523 [2024-12-05 19:49:03.424393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:36.523 [2024-12-05 19:49:03.424402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.268 ms 00:30:36.523 [2024-12-05 19:49:03.424409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.523 [2024-12-05 19:49:03.430237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.523 [2024-12-05 19:49:03.430266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:36.523 [2024-12-05 19:49:03.430276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.797 ms 00:30:36.523 [2024-12-05 19:49:03.430284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.523 [2024-12-05 19:49:03.430355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.523 [2024-12-05 19:49:03.430364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:36.523 [2024-12-05 19:49:03.430376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:30:36.523 [2024-12-05 19:49:03.430384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.523 [2024-12-05 19:49:03.439132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.523 [2024-12-05 19:49:03.439250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:36.523 [2024-12-05 19:49:03.439264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.734 ms 00:30:36.523 [2024-12-05 19:49:03.439272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.523 [2024-12-05 19:49:03.449925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.523 [2024-12-05 19:49:03.450055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:36.523 [2024-12-05 19:49:03.450072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.625 ms 00:30:36.523 [2024-12-05 19:49:03.450080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.523 [2024-12-05 19:49:03.458726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.523 [2024-12-05 19:49:03.458755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:36.523 [2024-12-05 19:49:03.458764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.616 ms 00:30:36.523 [2024-12-05 19:49:03.458772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.523 [2024-12-05 19:49:03.467162] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.523 [2024-12-05 19:49:03.467191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:36.523 [2024-12-05 19:49:03.467200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.318 ms 00:30:36.523 [2024-12-05 19:49:03.467207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.523 [2024-12-05 19:49:03.467235] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:36.523 [2024-12-05 19:49:03.467260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:36.523 [2024-12-05 19:49:03.467270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:36.523 [2024-12-05 19:49:03.467278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:36.523 [2024-12-05 19:49:03.467287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:36.523 [2024-12-05 19:49:03.467295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:36.523 [2024-12-05 19:49:03.467304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:36.523 [2024-12-05 19:49:03.467311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:36.523 [2024-12-05 19:49:03.467319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:36.523 [2024-12-05 19:49:03.467326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:36.523 [2024-12-05 19:49:03.467334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:36.523 [2024-12-05 19:49:03.467342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:36.523 [2024-12-05 19:49:03.467350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:36.523 [2024-12-05 19:49:03.467357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:36.523 [2024-12-05 19:49:03.467364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:36.523 [2024-12-05 19:49:03.467371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:36.523 [2024-12-05 19:49:03.467379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:36.523 [2024-12-05 19:49:03.467386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:36.523 [2024-12-05 19:49:03.467394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:36.523 [2024-12-05 19:49:03.467404] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:36.523 [2024-12-05 19:49:03.467412] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 00439dda-d519-4ec7-88a6-b1dfade02cf3 00:30:36.523 [2024-12-05 19:49:03.467419] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:36.523 [2024-12-05 19:49:03.467427] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:30:36.523 [2024-12-05 19:49:03.467433] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:36.523 [2024-12-05 19:49:03.467441] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:36.523 [2024-12-05 19:49:03.467448] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:36.523 [2024-12-05 19:49:03.467458] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:36.523 [2024-12-05 19:49:03.467465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:36.523 [2024-12-05 19:49:03.467471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:36.523 [2024-12-05 19:49:03.467477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:36.523 [2024-12-05 19:49:03.467485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.523 [2024-12-05 19:49:03.467495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:36.523 [2024-12-05 19:49:03.467503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.251 ms 00:30:36.523 [2024-12-05 19:49:03.467509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.523 [2024-12-05 19:49:03.479691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.523 [2024-12-05 19:49:03.479718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:36.523 [2024-12-05 19:49:03.479728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.166 ms 00:30:36.523 [2024-12-05 19:49:03.479740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.523 [2024-12-05 19:49:03.480067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.523 [2024-12-05 19:49:03.480086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:36.523 [2024-12-05 19:49:03.480095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.308 ms 00:30:36.523 [2024-12-05 19:49:03.480102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.523 [2024-12-05 19:49:03.520078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.523 [2024-12-05 19:49:03.520118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:36.523 [2024-12-05 19:49:03.520132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.523 [2024-12-05 19:49:03.520140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.523 [2024-12-05 19:49:03.520174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.523 [2024-12-05 19:49:03.520181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:36.523 [2024-12-05 19:49:03.520187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.523 [2024-12-05 19:49:03.520193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.523 [2024-12-05 19:49:03.520253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.523 [2024-12-05 19:49:03.520261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:36.523 [2024-12-05 19:49:03.520268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.523 [2024-12-05 19:49:03.520276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.523 [2024-12-05 19:49:03.520289] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.523 [2024-12-05 19:49:03.520296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:36.523 [2024-12-05 19:49:03.520302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.523 [2024-12-05 19:49:03.520307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.523 [2024-12-05 19:49:03.582318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.523 [2024-12-05 19:49:03.582363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:36.523 [2024-12-05 19:49:03.582375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.523 [2024-12-05 19:49:03.582386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.523 [2024-12-05 19:49:03.631874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.523 [2024-12-05 19:49:03.632037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:36.523 [2024-12-05 19:49:03.632051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.523 [2024-12-05 19:49:03.632058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.523 [2024-12-05 19:49:03.632140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.523 [2024-12-05 19:49:03.632149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:36.523 [2024-12-05 19:49:03.632156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.523 [2024-12-05 19:49:03.632162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.523 [2024-12-05 19:49:03.632198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.523 [2024-12-05 19:49:03.632206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:36.523 [2024-12-05 19:49:03.632212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.523 [2024-12-05 19:49:03.632218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.523 [2024-12-05 19:49:03.632299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.523 [2024-12-05 19:49:03.632307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:36.524 [2024-12-05 19:49:03.632313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.524 [2024-12-05 19:49:03.632319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.524 [2024-12-05 19:49:03.632343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.524 [2024-12-05 19:49:03.632352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:36.524 [2024-12-05 19:49:03.632358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.524 [2024-12-05 19:49:03.632364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.524 [2024-12-05 19:49:03.632393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.524 [2024-12-05 19:49:03.632400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:36.524 [2024-12-05 19:49:03.632407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.524 [2024-12-05 19:49:03.632413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.524 
[2024-12-05 19:49:03.632449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.524 [2024-12-05 19:49:03.632457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:36.524 [2024-12-05 19:49:03.632464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.524 [2024-12-05 19:49:03.632470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.524 [2024-12-05 19:49:03.632563] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8029.555 ms, result 0 00:30:44.634 19:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:44.634 19:49:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:44.634 19:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:44.634 19:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:44.634 19:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:44.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:44.634 19:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82504 00:30:44.634 19:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:44.634 19:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82504 00:30:44.634 19:49:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82504 ']' 00:30:44.634 19:49:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:44.634 19:49:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:44.634 19:49:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:44.634 19:49:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:44.634 19:49:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:44.634 19:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:44.634 [2024-12-05 19:49:11.227339] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
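The cycle traced above — arm prep_upgrade_on_shutdown on the live target, count the NV-cache chunks still holding data, kill the target so FTL runs its 'FTL shutdown' persistence sequence, then relaunch from the saved config — is the heart of this test. A minimal sketch of that flow, condensed from the commands visible in this log (waitforlisten and the killprocess logic come from autotest_common.sh, and $spdk_tgt_pid is assumed to be set by the caller; this is illustrative, not the literal test script):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Arm the upgrade path on the live target.
  $RPC bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true

  # Count NV-cache chunks holding data (the jq filter from upgrade_shutdown.sh@63
  # above); the script branches on this count before shutting down.
  used=$($RPC bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')

  # Shut the target down cleanly; this triggers the traced 'FTL shutdown' sequence
  # (persist L2P, NV cache and band metadata, superblock, set FTL clean state).
  kill "$spdk_tgt_pid"
  wait "$spdk_tgt_pid"

  # Relaunch from the saved config, wait for the RPC socket, then re-read the
  # properties to confirm the new instance came up from the persisted state.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"
  $RPC bdev_ftl_get_properties -b ftl

Comparing the property dumps before and after this cycle suggests what the armed shutdown does: beforehand the cache chunks 1-2 are CLOSED at utilization 1.0 with all bands FREE, while after the restart the bands are CLOSED with their validity populated and the chunks have returned to OPEN/FREE at 0.0 — i.e. the shutdown migrated the cached data into bands so the layout is ready for a superblock upgrade.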
00:30:44.634 [2024-12-05 19:49:11.227893] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82504 ] 00:30:44.634 [2024-12-05 19:49:11.384747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.634 [2024-12-05 19:49:11.478199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.983 [2024-12-05 19:49:12.207442] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:44.983 [2024-12-05 19:49:12.207511] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:45.241 [2024-12-05 19:49:12.351706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.241 [2024-12-05 19:49:12.351745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:45.241 [2024-12-05 19:49:12.351758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:45.241 [2024-12-05 19:49:12.351766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.241 [2024-12-05 19:49:12.351814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.241 [2024-12-05 19:49:12.351823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:45.241 [2024-12-05 19:49:12.351831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:30:45.241 [2024-12-05 19:49:12.351838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.241 [2024-12-05 19:49:12.351859] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:45.241 [2024-12-05 19:49:12.352547] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:45.241 [2024-12-05 19:49:12.352562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.241 [2024-12-05 19:49:12.352569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:45.241 [2024-12-05 19:49:12.352577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.710 ms 00:30:45.241 [2024-12-05 19:49:12.352584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.241 [2024-12-05 19:49:12.353648] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:45.241 [2024-12-05 19:49:12.365701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.241 [2024-12-05 19:49:12.365731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:45.241 [2024-12-05 19:49:12.365746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.055 ms 00:30:45.241 [2024-12-05 19:49:12.365753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.241 [2024-12-05 19:49:12.365804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.241 [2024-12-05 19:49:12.365813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:45.241 [2024-12-05 19:49:12.365820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:45.241 [2024-12-05 19:49:12.365827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.241 [2024-12-05 19:49:12.370348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.241 [2024-12-05 
19:49:12.370374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:45.241 [2024-12-05 19:49:12.370383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.460 ms 00:30:45.241 [2024-12-05 19:49:12.370390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.241 [2024-12-05 19:49:12.370442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.241 [2024-12-05 19:49:12.370452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:45.241 [2024-12-05 19:49:12.370460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:30:45.241 [2024-12-05 19:49:12.370467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.241 [2024-12-05 19:49:12.370506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.241 [2024-12-05 19:49:12.370517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:45.241 [2024-12-05 19:49:12.370525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:45.241 [2024-12-05 19:49:12.370532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.241 [2024-12-05 19:49:12.370551] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:45.241 [2024-12-05 19:49:12.373694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.241 [2024-12-05 19:49:12.373720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:45.241 [2024-12-05 19:49:12.373728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.147 ms 00:30:45.241 [2024-12-05 19:49:12.373739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.241 [2024-12-05 19:49:12.373767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.242 [2024-12-05 19:49:12.373775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:45.242 [2024-12-05 19:49:12.373783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:45.242 [2024-12-05 19:49:12.373790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.242 [2024-12-05 19:49:12.373811] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:45.242 [2024-12-05 19:49:12.373831] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:45.242 [2024-12-05 19:49:12.373863] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:45.242 [2024-12-05 19:49:12.373878] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:45.242 [2024-12-05 19:49:12.373978] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:45.242 [2024-12-05 19:49:12.373987] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:45.242 [2024-12-05 19:49:12.373997] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:45.242 [2024-12-05 19:49:12.374007] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:45.242 [2024-12-05 19:49:12.374015] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:45.242 [2024-12-05 19:49:12.374025] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:45.242 [2024-12-05 19:49:12.374032] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:45.242 [2024-12-05 19:49:12.374039] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:45.242 [2024-12-05 19:49:12.374046] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:45.242 [2024-12-05 19:49:12.374054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.242 [2024-12-05 19:49:12.374061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:45.242 [2024-12-05 19:49:12.374069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.245 ms 00:30:45.242 [2024-12-05 19:49:12.374075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.242 [2024-12-05 19:49:12.374161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.242 [2024-12-05 19:49:12.374169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:45.242 [2024-12-05 19:49:12.374179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:30:45.242 [2024-12-05 19:49:12.374186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.242 [2024-12-05 19:49:12.374284] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:45.242 [2024-12-05 19:49:12.374293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:45.242 [2024-12-05 19:49:12.374301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:45.242 [2024-12-05 19:49:12.374308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:45.242 [2024-12-05 19:49:12.374315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:45.242 [2024-12-05 19:49:12.374322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:45.242 [2024-12-05 19:49:12.374328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:45.242 [2024-12-05 19:49:12.374335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:45.242 [2024-12-05 19:49:12.374343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:45.242 [2024-12-05 19:49:12.374349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:45.242 [2024-12-05 19:49:12.374356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:45.242 [2024-12-05 19:49:12.374362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:45.242 [2024-12-05 19:49:12.374369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:45.242 [2024-12-05 19:49:12.374375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:45.242 [2024-12-05 19:49:12.374382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:45.242 [2024-12-05 19:49:12.374388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:45.242 [2024-12-05 19:49:12.374396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:45.242 [2024-12-05 19:49:12.374402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:45.242 [2024-12-05 19:49:12.374409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:45.242 [2024-12-05 19:49:12.374416] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:45.242 [2024-12-05 19:49:12.374422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:45.242 [2024-12-05 19:49:12.374428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:45.242 [2024-12-05 19:49:12.374435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:45.242 [2024-12-05 19:49:12.374446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:45.242 [2024-12-05 19:49:12.374453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:45.242 [2024-12-05 19:49:12.374460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:45.242 [2024-12-05 19:49:12.374466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:45.242 [2024-12-05 19:49:12.374472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:45.242 [2024-12-05 19:49:12.374479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:45.242 [2024-12-05 19:49:12.374485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:45.242 [2024-12-05 19:49:12.374491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:45.242 [2024-12-05 19:49:12.374498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:45.242 [2024-12-05 19:49:12.374504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:45.242 [2024-12-05 19:49:12.374510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:45.242 [2024-12-05 19:49:12.374516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:45.242 [2024-12-05 19:49:12.374523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:45.242 [2024-12-05 19:49:12.374529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:45.242 [2024-12-05 19:49:12.374535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:45.242 [2024-12-05 19:49:12.374542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:45.242 [2024-12-05 19:49:12.374548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:45.242 [2024-12-05 19:49:12.374555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:45.242 [2024-12-05 19:49:12.374561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:45.242 [2024-12-05 19:49:12.374567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:45.242 [2024-12-05 19:49:12.374573] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:45.242 [2024-12-05 19:49:12.374581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:45.242 [2024-12-05 19:49:12.374588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:45.242 [2024-12-05 19:49:12.374594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:45.242 [2024-12-05 19:49:12.374604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:45.242 [2024-12-05 19:49:12.374614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:45.242 [2024-12-05 19:49:12.374620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:45.242 [2024-12-05 19:49:12.374627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:45.242 [2024-12-05 19:49:12.374634] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:45.242 [2024-12-05 19:49:12.374640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:45.242 [2024-12-05 19:49:12.374647] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:45.242 [2024-12-05 19:49:12.374656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:45.242 [2024-12-05 19:49:12.374664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:45.242 [2024-12-05 19:49:12.374683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:45.242 [2024-12-05 19:49:12.374691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:45.242 [2024-12-05 19:49:12.374697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:45.242 [2024-12-05 19:49:12.374705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:45.242 [2024-12-05 19:49:12.374712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:45.242 [2024-12-05 19:49:12.374718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:45.242 [2024-12-05 19:49:12.374725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:45.242 [2024-12-05 19:49:12.374732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:45.242 [2024-12-05 19:49:12.374739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:45.242 [2024-12-05 19:49:12.374746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:45.242 [2024-12-05 19:49:12.374753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:45.242 [2024-12-05 19:49:12.374760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:45.242 [2024-12-05 19:49:12.374766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:45.242 [2024-12-05 19:49:12.374773] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:45.242 [2024-12-05 19:49:12.374781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:45.242 [2024-12-05 19:49:12.374788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:45.243 [2024-12-05 19:49:12.374795] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:45.243 [2024-12-05 19:49:12.374802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:45.243 [2024-12-05 19:49:12.374809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:45.243 [2024-12-05 19:49:12.374817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.243 [2024-12-05 19:49:12.374824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:45.243 [2024-12-05 19:49:12.374831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.601 ms 00:30:45.243 [2024-12-05 19:49:12.374838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.243 [2024-12-05 19:49:12.374887] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:45.243 [2024-12-05 19:49:12.374898] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:47.771 [2024-12-05 19:49:14.426721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.771 [2024-12-05 19:49:14.426769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:47.771 [2024-12-05 19:49:14.426785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2051.825 ms 00:30:47.771 [2024-12-05 19:49:14.426795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.771 [2024-12-05 19:49:14.451645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.771 [2024-12-05 19:49:14.451700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:47.771 [2024-12-05 19:49:14.451713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.645 ms 00:30:47.771 [2024-12-05 19:49:14.451721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.771 [2024-12-05 19:49:14.451795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.771 [2024-12-05 19:49:14.451811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:47.771 [2024-12-05 19:49:14.451820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:47.771 [2024-12-05 19:49:14.451827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.771 [2024-12-05 19:49:14.481769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.771 [2024-12-05 19:49:14.481897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:47.771 [2024-12-05 19:49:14.481918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.895 ms 00:30:47.771 [2024-12-05 19:49:14.481925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.771 [2024-12-05 19:49:14.481957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.771 [2024-12-05 19:49:14.481965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:47.771 [2024-12-05 19:49:14.481973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:47.771 [2024-12-05 19:49:14.481981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.771 [2024-12-05 19:49:14.482308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.771 [2024-12-05 19:49:14.482323] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:47.771 [2024-12-05 19:49:14.482332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.279 ms 00:30:47.771 [2024-12-05 19:49:14.482339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.771 [2024-12-05 19:49:14.482379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.771 [2024-12-05 19:49:14.482388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:47.771 [2024-12-05 19:49:14.482396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:30:47.771 [2024-12-05 19:49:14.482403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.771 [2024-12-05 19:49:14.496119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.772 [2024-12-05 19:49:14.496149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:47.772 [2024-12-05 19:49:14.496159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.697 ms 00:30:47.772 [2024-12-05 19:49:14.496166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.772 [2024-12-05 19:49:14.521214] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:47.772 [2024-12-05 19:49:14.521253] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:47.772 [2024-12-05 19:49:14.521267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.772 [2024-12-05 19:49:14.521276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:47.772 [2024-12-05 19:49:14.521286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.006 ms 00:30:47.772 [2024-12-05 19:49:14.521293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.772 [2024-12-05 19:49:14.534519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.772 [2024-12-05 19:49:14.534654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:47.772 [2024-12-05 19:49:14.534684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.187 ms 00:30:47.772 [2024-12-05 19:49:14.534692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.772 [2024-12-05 19:49:14.545811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.772 [2024-12-05 19:49:14.545841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:47.772 [2024-12-05 19:49:14.545851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.083 ms 00:30:47.772 [2024-12-05 19:49:14.545858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.772 [2024-12-05 19:49:14.557012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.772 [2024-12-05 19:49:14.557163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:47.772 [2024-12-05 19:49:14.557177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.120 ms 00:30:47.772 [2024-12-05 19:49:14.557185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.772 [2024-12-05 19:49:14.557819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.772 [2024-12-05 19:49:14.557840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:47.772 [2024-12-05 
19:49:14.557849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.538 ms 00:30:47.772 [2024-12-05 19:49:14.557856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.772 [2024-12-05 19:49:14.610873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.772 [2024-12-05 19:49:14.610919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:47.772 [2024-12-05 19:49:14.610930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 52.998 ms 00:30:47.772 [2024-12-05 19:49:14.610938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.772 [2024-12-05 19:49:14.621359] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:47.772 [2024-12-05 19:49:14.622027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.772 [2024-12-05 19:49:14.622055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:47.772 [2024-12-05 19:49:14.622065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.046 ms 00:30:47.772 [2024-12-05 19:49:14.622073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.772 [2024-12-05 19:49:14.622143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.772 [2024-12-05 19:49:14.622155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:47.772 [2024-12-05 19:49:14.622164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:47.772 [2024-12-05 19:49:14.622171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.772 [2024-12-05 19:49:14.622220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.772 [2024-12-05 19:49:14.622230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:47.772 [2024-12-05 19:49:14.622238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:47.772 [2024-12-05 19:49:14.622245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.772 [2024-12-05 19:49:14.622266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.772 [2024-12-05 19:49:14.622274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:47.772 [2024-12-05 19:49:14.622284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:47.772 [2024-12-05 19:49:14.622291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.772 [2024-12-05 19:49:14.622321] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:47.772 [2024-12-05 19:49:14.622330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.772 [2024-12-05 19:49:14.622337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:47.772 [2024-12-05 19:49:14.622345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:47.772 [2024-12-05 19:49:14.622352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.772 [2024-12-05 19:49:14.644568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.772 [2024-12-05 19:49:14.644714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:47.772 [2024-12-05 19:49:14.644730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.198 ms 00:30:47.772 [2024-12-05 19:49:14.644738] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.772 [2024-12-05 19:49:14.644801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.772 [2024-12-05 19:49:14.644811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:47.772 [2024-12-05 19:49:14.644819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:30:47.772 [2024-12-05 19:49:14.644826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.772 [2024-12-05 19:49:14.645730] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2293.606 ms, result 0 00:30:47.772 [2024-12-05 19:49:14.661015] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:47.772 [2024-12-05 19:49:14.677004] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:47.772 [2024-12-05 19:49:14.685110] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:47.772 19:49:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:47.772 19:49:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:47.772 19:49:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:47.772 19:49:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:47.772 19:49:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:47.772 [2024-12-05 19:49:14.909201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.772 [2024-12-05 19:49:14.909241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:47.772 [2024-12-05 19:49:14.909256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:47.772 [2024-12-05 19:49:14.909265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.772 [2024-12-05 19:49:14.909287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.772 [2024-12-05 19:49:14.909296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:47.772 [2024-12-05 19:49:14.909304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:47.772 [2024-12-05 19:49:14.909311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.772 [2024-12-05 19:49:14.909330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.772 [2024-12-05 19:49:14.909338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:47.772 [2024-12-05 19:49:14.909346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:47.772 [2024-12-05 19:49:14.909353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.772 [2024-12-05 19:49:14.909407] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.196 ms, result 0 00:30:47.772 true 00:30:47.772 19:49:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:48.029 { 00:30:48.029 "name": "ftl", 00:30:48.029 "properties": [ 00:30:48.029 { 00:30:48.029 "name": "superblock_version", 00:30:48.029 "value": 5, 00:30:48.029 "read-only": true 00:30:48.029 }, 
00:30:48.029 { 00:30:48.029 "name": "base_device", 00:30:48.029 "bands": [ 00:30:48.029 { 00:30:48.029 "id": 0, 00:30:48.029 "state": "CLOSED", 00:30:48.029 "validity": 1.0 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "id": 1, 00:30:48.029 "state": "CLOSED", 00:30:48.029 "validity": 1.0 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "id": 2, 00:30:48.029 "state": "CLOSED", 00:30:48.029 "validity": 0.007843137254901933 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "id": 3, 00:30:48.029 "state": "FREE", 00:30:48.029 "validity": 0.0 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "id": 4, 00:30:48.029 "state": "FREE", 00:30:48.029 "validity": 0.0 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "id": 5, 00:30:48.029 "state": "FREE", 00:30:48.029 "validity": 0.0 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "id": 6, 00:30:48.029 "state": "FREE", 00:30:48.029 "validity": 0.0 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "id": 7, 00:30:48.029 "state": "FREE", 00:30:48.029 "validity": 0.0 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "id": 8, 00:30:48.029 "state": "FREE", 00:30:48.029 "validity": 0.0 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "id": 9, 00:30:48.029 "state": "FREE", 00:30:48.029 "validity": 0.0 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "id": 10, 00:30:48.029 "state": "FREE", 00:30:48.029 "validity": 0.0 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "id": 11, 00:30:48.029 "state": "FREE", 00:30:48.029 "validity": 0.0 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "id": 12, 00:30:48.029 "state": "FREE", 00:30:48.029 "validity": 0.0 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "id": 13, 00:30:48.029 "state": "FREE", 00:30:48.029 "validity": 0.0 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "id": 14, 00:30:48.029 "state": "FREE", 00:30:48.029 "validity": 0.0 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "id": 15, 00:30:48.029 "state": "FREE", 00:30:48.029 "validity": 0.0 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "id": 16, 00:30:48.029 "state": "FREE", 00:30:48.029 "validity": 0.0 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "id": 17, 00:30:48.029 "state": "FREE", 00:30:48.029 "validity": 0.0 00:30:48.029 } 00:30:48.029 ], 00:30:48.029 "read-only": true 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "name": "cache_device", 00:30:48.029 "type": "bdev", 00:30:48.029 "chunks": [ 00:30:48.029 { 00:30:48.029 "id": 0, 00:30:48.029 "state": "INACTIVE", 00:30:48.029 "utilization": 0.0 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "id": 1, 00:30:48.029 "state": "OPEN", 00:30:48.029 "utilization": 0.0 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "id": 2, 00:30:48.029 "state": "OPEN", 00:30:48.029 "utilization": 0.0 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "id": 3, 00:30:48.029 "state": "FREE", 00:30:48.029 "utilization": 0.0 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "id": 4, 00:30:48.029 "state": "FREE", 00:30:48.029 "utilization": 0.0 00:30:48.029 } 00:30:48.029 ], 00:30:48.029 "read-only": true 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "name": "verbose_mode", 00:30:48.029 "value": true, 00:30:48.029 "unit": "", 00:30:48.029 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:48.029 }, 00:30:48.029 { 00:30:48.029 "name": "prep_upgrade_on_shutdown", 00:30:48.029 "value": false, 00:30:48.029 "unit": "", 00:30:48.029 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:48.029 } 00:30:48.029 ] 00:30:48.029 } 00:30:48.029 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:48.029 19:49:15 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:48.029 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:48.287 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:48.287 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:48.287 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:48.287 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:48.287 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:48.287 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:48.287 Validate MD5 checksum, iteration 1 00:30:48.287 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:48.287 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:48.287 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:48.287 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:48.287 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:48.287 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:48.287 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:48.287 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:48.287 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:48.287 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:48.287 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:48.287 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:48.543 [2024-12-05 19:49:15.591993] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
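The two jq probes echoed above are the test's cleanliness gate before checksumming: one counts write-buffer chunks whose utilization is non-zero, the other counts bands still in the OPENED state, and the run proceeds only when both come back 0. A minimal standalone sketch of the same probes follows, reusing the rpc.py path, bdev name, and jq filters from the trace (the shell variable names are illustrative, not from the script):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Count NV-cache chunks that still hold unwritten data.
    used=$("$RPC" bdev_ftl_get_properties -b ftl | jq \
      '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    # Count bands left OPENED (filter copied verbatim from the trace above).
    opened=$("$RPC" bdev_ftl_get_properties -b ftl | jq \
      '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')
    (( used == 0 && opened == 0 )) || exit 1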
00:30:48.543 [2024-12-05 19:49:15.592242] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82571 ] 00:30:48.543 [2024-12-05 19:49:15.751421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.800 [2024-12-05 19:49:15.847126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.171  [2024-12-05T19:49:17.992Z] Copying: 650/1024 [MB] (650 MBps) [2024-12-05T19:49:18.934Z] Copying: 1024/1024 [MB] (average 660 MBps) 00:30:51.679 00:30:51.936 19:49:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:51.936 19:49:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:53.870 19:49:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:53.870 19:49:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7b7c500f4a7c07751e89f45c4a373719 00:30:53.870 19:49:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7b7c500f4a7c07751e89f45c4a373719 != \7\b\7\c\5\0\0\f\4\a\7\c\0\7\7\5\1\e\8\9\f\4\5\c\4\a\3\7\3\7\1\9 ]] 00:30:53.870 19:49:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:53.870 19:49:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:53.870 Validate MD5 checksum, iteration 2 00:30:53.870 19:49:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:53.870 19:49:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:53.870 19:49:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:53.870 19:49:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:53.870 19:49:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:53.870 19:49:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:53.870 19:49:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:54.128 [2024-12-05 19:49:21.171441] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
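Each "Validate MD5 checksum" iteration above has the same shape: read 1024 MiB from ftln1 over NVMe/TCP via spdk_dd, hash the output file, compare against the sum recorded for that 1 GiB region, and advance the skip offset. A hedged reconstruction of that loop — tcp_dd (the test/ftl/common.sh helper seen in the xtrace), the file path, and the dd parameters come from the log, while the expected-sum array and loop bound are assumptions for illustration:

    FILE=/home/vagrant/spdk_repo/spdk/test/ftl/file
    iterations=2   # this run validates two 1 GiB regions (assumed, matches the trace)
    skip=0
    for (( i = 0; i < iterations; i++ )); do
        echo "Validate MD5 checksum, iteration $(( i + 1 ))"
        # tcp_dd wraps spdk_dd with the NVMe/TCP initiator config (ini.json).
        tcp_dd --ib=ftln1 --of="$FILE" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        sum=$(md5sum "$FILE" | cut -f1 -d ' ')
        # expected[] is hypothetical; the script compares against sums taken pre-shutdown.
        [[ $sum == "${expected[i]}" ]] || exit 1
        skip=$(( skip + 1024 ))
    done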
00:30:54.128 [2024-12-05 19:49:21.171597] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82632 ] 00:30:54.128 [2024-12-05 19:49:21.345420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.385 [2024-12-05 19:49:21.442466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:55.804  [2024-12-05T19:49:23.626Z] Copying: 684/1024 [MB] (684 MBps) [2024-12-05T19:49:31.732Z] Copying: 1024/1024 [MB] (average 669 MBps) 00:31:04.477 00:31:04.477 19:49:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:04.477 19:49:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=66ebc005a81859352e18c5c481c03108 00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 66ebc005a81859352e18c5c481c03108 != \6\6\e\b\c\0\0\5\a\8\1\8\5\9\3\5\2\e\1\8\c\5\c\4\8\1\c\0\3\1\0\8 ]] 00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 82504 ]] 00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 82504 00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82758 00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82758 00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82758 ']' 00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:05.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
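The dirty-shutdown step that follows is deliberately blunt: the running target (pid 82504) is killed with SIGKILL so FTL never writes a clean-shutdown marker, then a fresh target (pid 82758) is started from the tgt.json captured earlier, forcing the next FTL startup through recovery. A sketch of that sequence, assuming the helper names echoed in the xtrace behave as shown:

    # Kill the target hard; FTL gets no chance to persist a clean state.
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid
    # Relaunch from the saved JSON config; startup will detect the dirty state
    # and replay NV-cache chunks and P2L checkpoints (visible in the log below).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # block until /var/tmp/spdk.sock accepts RPCs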
00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:05.851 19:49:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:05.851 [2024-12-05 19:49:32.861213] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:31:05.851 [2024-12-05 19:49:32.861491] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82758 ] 00:31:05.851 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 82504 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:31:05.851 [2024-12-05 19:49:33.018390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.851 [2024-12-05 19:49:33.101092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.802 [2024-12-05 19:49:33.690115] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:06.802 [2024-12-05 19:49:33.690321] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:06.802 [2024-12-05 19:49:33.833525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.802 [2024-12-05 19:49:33.833713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:06.802 [2024-12-05 19:49:33.833730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:06.802 [2024-12-05 19:49:33.833738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.802 [2024-12-05 19:49:33.833789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.802 [2024-12-05 19:49:33.833798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:06.802 [2024-12-05 19:49:33.833804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:31:06.802 [2024-12-05 19:49:33.833811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.802 [2024-12-05 19:49:33.833832] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:06.802 [2024-12-05 19:49:33.834397] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:06.802 [2024-12-05 19:49:33.834410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.802 [2024-12-05 19:49:33.834417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:06.802 [2024-12-05 19:49:33.834424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.586 ms 00:31:06.802 [2024-12-05 19:49:33.834430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.802 [2024-12-05 19:49:33.834661] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:06.802 [2024-12-05 19:49:33.847313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.802 [2024-12-05 19:49:33.847344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:06.802 [2024-12-05 19:49:33.847355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.652 ms 00:31:06.802 [2024-12-05 19:49:33.847362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.802 [2024-12-05 19:49:33.854468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:31:06.802 [2024-12-05 19:49:33.854497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:06.802 [2024-12-05 19:49:33.854504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:31:06.802 [2024-12-05 19:49:33.854510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.802 [2024-12-05 19:49:33.854775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.802 [2024-12-05 19:49:33.854785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:06.802 [2024-12-05 19:49:33.854792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.203 ms 00:31:06.802 [2024-12-05 19:49:33.854798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.802 [2024-12-05 19:49:33.854838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.802 [2024-12-05 19:49:33.854845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:06.802 [2024-12-05 19:49:33.854852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:31:06.802 [2024-12-05 19:49:33.854858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.802 [2024-12-05 19:49:33.854876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.802 [2024-12-05 19:49:33.854883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:06.802 [2024-12-05 19:49:33.854889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:06.802 [2024-12-05 19:49:33.854895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.802 [2024-12-05 19:49:33.854910] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:06.802 [2024-12-05 19:49:33.857299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.802 [2024-12-05 19:49:33.857412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:06.802 [2024-12-05 19:49:33.857425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.392 ms 00:31:06.802 [2024-12-05 19:49:33.857432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.802 [2024-12-05 19:49:33.857459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.802 [2024-12-05 19:49:33.857467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:06.802 [2024-12-05 19:49:33.857474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:06.802 [2024-12-05 19:49:33.857481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.802 [2024-12-05 19:49:33.857499] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:06.802 [2024-12-05 19:49:33.857516] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:06.802 [2024-12-05 19:49:33.857545] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:06.802 [2024-12-05 19:49:33.857561] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:06.802 [2024-12-05 19:49:33.857646] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:06.802 [2024-12-05 19:49:33.857655] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:06.802 [2024-12-05 19:49:33.857664] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:06.802 [2024-12-05 19:49:33.857692] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:06.802 [2024-12-05 19:49:33.857701] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:06.802 [2024-12-05 19:49:33.857708] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:06.803 [2024-12-05 19:49:33.857715] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:06.803 [2024-12-05 19:49:33.857721] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:06.803 [2024-12-05 19:49:33.857728] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:06.803 [2024-12-05 19:49:33.857737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.803 [2024-12-05 19:49:33.857744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:06.803 [2024-12-05 19:49:33.857751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.240 ms 00:31:06.803 [2024-12-05 19:49:33.857757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.803 [2024-12-05 19:49:33.857825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.803 [2024-12-05 19:49:33.857832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:06.803 [2024-12-05 19:49:33.857839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:31:06.803 [2024-12-05 19:49:33.857845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.803 [2024-12-05 19:49:33.857924] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:06.803 [2024-12-05 19:49:33.857934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:06.803 [2024-12-05 19:49:33.857942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:06.803 [2024-12-05 19:49:33.857948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.803 [2024-12-05 19:49:33.857955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:06.803 [2024-12-05 19:49:33.857961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:06.803 [2024-12-05 19:49:33.857967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:06.803 [2024-12-05 19:49:33.857974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:06.803 [2024-12-05 19:49:33.857980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:06.803 [2024-12-05 19:49:33.857985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.803 [2024-12-05 19:49:33.857992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:06.803 [2024-12-05 19:49:33.857998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:06.803 [2024-12-05 19:49:33.858008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.803 [2024-12-05 19:49:33.858015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:06.803 [2024-12-05 19:49:33.858021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:31:06.803 [2024-12-05 19:49:33.858026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.803 [2024-12-05 19:49:33.858033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:06.803 [2024-12-05 19:49:33.858038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:06.803 [2024-12-05 19:49:33.858044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.803 [2024-12-05 19:49:33.858050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:06.803 [2024-12-05 19:49:33.858056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:06.803 [2024-12-05 19:49:33.858068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:06.803 [2024-12-05 19:49:33.858074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:06.803 [2024-12-05 19:49:33.858080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:06.803 [2024-12-05 19:49:33.858085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:06.803 [2024-12-05 19:49:33.858091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:06.803 [2024-12-05 19:49:33.858097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:06.803 [2024-12-05 19:49:33.858103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:06.803 [2024-12-05 19:49:33.858109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:06.803 [2024-12-05 19:49:33.858115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:06.803 [2024-12-05 19:49:33.858121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:06.803 [2024-12-05 19:49:33.858127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:06.803 [2024-12-05 19:49:33.858133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:06.803 [2024-12-05 19:49:33.858139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.803 [2024-12-05 19:49:33.858145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:06.803 [2024-12-05 19:49:33.858151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:06.803 [2024-12-05 19:49:33.858157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.803 [2024-12-05 19:49:33.858163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:06.803 [2024-12-05 19:49:33.858169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:06.803 [2024-12-05 19:49:33.858174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.803 [2024-12-05 19:49:33.858180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:06.803 [2024-12-05 19:49:33.858186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:06.803 [2024-12-05 19:49:33.858191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.803 [2024-12-05 19:49:33.858197] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:06.803 [2024-12-05 19:49:33.858205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:06.803 [2024-12-05 19:49:33.858212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:06.803 [2024-12-05 19:49:33.858218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:31:06.803 [2024-12-05 19:49:33.858225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:06.803 [2024-12-05 19:49:33.858231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:06.803 [2024-12-05 19:49:33.858237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:06.803 [2024-12-05 19:49:33.858243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:06.803 [2024-12-05 19:49:33.858249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:06.803 [2024-12-05 19:49:33.858255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:06.803 [2024-12-05 19:49:33.858262] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:06.803 [2024-12-05 19:49:33.858270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:06.803 [2024-12-05 19:49:33.858277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:06.803 [2024-12-05 19:49:33.858284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:06.803 [2024-12-05 19:49:33.858290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:06.803 [2024-12-05 19:49:33.858297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:06.803 [2024-12-05 19:49:33.858303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:06.803 [2024-12-05 19:49:33.858310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:06.803 [2024-12-05 19:49:33.858316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:06.803 [2024-12-05 19:49:33.858322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:06.803 [2024-12-05 19:49:33.858328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:06.803 [2024-12-05 19:49:33.858336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:06.803 [2024-12-05 19:49:33.858343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:06.803 [2024-12-05 19:49:33.858349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:06.803 [2024-12-05 19:49:33.858356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:06.803 [2024-12-05 19:49:33.858362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:06.803 [2024-12-05 19:49:33.858368] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:31:06.803 [2024-12-05 19:49:33.858375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:06.803 [2024-12-05 19:49:33.858384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:06.803 [2024-12-05 19:49:33.858390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:06.803 [2024-12-05 19:49:33.858397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:06.803 [2024-12-05 19:49:33.858403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:06.803 [2024-12-05 19:49:33.858410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.803 [2024-12-05 19:49:33.858418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:06.803 [2024-12-05 19:49:33.858424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.540 ms 00:31:06.803 [2024-12-05 19:49:33.858431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.803 [2024-12-05 19:49:33.878059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.803 [2024-12-05 19:49:33.878087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:06.803 [2024-12-05 19:49:33.878096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.575 ms 00:31:06.803 [2024-12-05 19:49:33.878102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.803 [2024-12-05 19:49:33.878133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.803 [2024-12-05 19:49:33.878140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:06.803 [2024-12-05 19:49:33.878146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:06.803 [2024-12-05 19:49:33.878152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.803 [2024-12-05 19:49:33.902642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.803 [2024-12-05 19:49:33.902682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:06.803 [2024-12-05 19:49:33.902691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.447 ms 00:31:06.804 [2024-12-05 19:49:33.902698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.804 [2024-12-05 19:49:33.902721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.804 [2024-12-05 19:49:33.902728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:06.804 [2024-12-05 19:49:33.902735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:06.804 [2024-12-05 19:49:33.902742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.804 [2024-12-05 19:49:33.902817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.804 [2024-12-05 19:49:33.902826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:06.804 [2024-12-05 19:49:33.902832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:31:06.804 [2024-12-05 19:49:33.902838] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:06.804 [2024-12-05 19:49:33.902869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.804 [2024-12-05 19:49:33.902876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:06.804 [2024-12-05 19:49:33.902882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:06.804 [2024-12-05 19:49:33.902888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.804 [2024-12-05 19:49:33.914770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.804 [2024-12-05 19:49:33.914797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:06.804 [2024-12-05 19:49:33.914805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.861 ms 00:31:06.804 [2024-12-05 19:49:33.914811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.804 [2024-12-05 19:49:33.914894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.804 [2024-12-05 19:49:33.914903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:31:06.804 [2024-12-05 19:49:33.914909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:06.804 [2024-12-05 19:49:33.914916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.804 [2024-12-05 19:49:33.942565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.804 [2024-12-05 19:49:33.942609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:31:06.804 [2024-12-05 19:49:33.942624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.633 ms 00:31:06.804 [2024-12-05 19:49:33.942634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.804 [2024-12-05 19:49:33.950722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.804 [2024-12-05 19:49:33.950749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:06.804 [2024-12-05 19:49:33.950764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.430 ms 00:31:06.804 [2024-12-05 19:49:33.950771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.804 [2024-12-05 19:49:33.995362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.804 [2024-12-05 19:49:33.995410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:06.804 [2024-12-05 19:49:33.995421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.541 ms 00:31:06.804 [2024-12-05 19:49:33.995428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.804 [2024-12-05 19:49:33.995543] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:31:06.804 [2024-12-05 19:49:33.995622] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:31:06.804 [2024-12-05 19:49:33.995710] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:31:06.804 [2024-12-05 19:49:33.995786] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:31:06.804 [2024-12-05 19:49:33.995794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.804 [2024-12-05 19:49:33.995801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:31:06.804 [2024-12-05 
19:49:33.995809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.324 ms 00:31:06.804 [2024-12-05 19:49:33.995815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.804 [2024-12-05 19:49:33.995868] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:31:06.804 [2024-12-05 19:49:33.995877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.804 [2024-12-05 19:49:33.995887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:31:06.804 [2024-12-05 19:49:33.995893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:06.804 [2024-12-05 19:49:33.995899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.804 [2024-12-05 19:49:34.007516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.804 [2024-12-05 19:49:34.007550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:31:06.804 [2024-12-05 19:49:34.007561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.599 ms 00:31:06.804 [2024-12-05 19:49:34.007568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.804 [2024-12-05 19:49:34.014245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.804 [2024-12-05 19:49:34.014272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:31:06.804 [2024-12-05 19:49:34.014280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:06.804 [2024-12-05 19:49:34.014287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.804 [2024-12-05 19:49:34.014357] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:31:06.804 [2024-12-05 19:49:34.014468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.804 [2024-12-05 19:49:34.014477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:06.804 [2024-12-05 19:49:34.014484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.112 ms 00:31:06.804 [2024-12-05 19:49:34.014489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.369 [2024-12-05 19:49:34.444045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.369 [2024-12-05 19:49:34.444240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:07.369 [2024-12-05 19:49:34.444263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 428.858 ms 00:31:07.369 [2024-12-05 19:49:34.444272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.369 [2024-12-05 19:49:34.448096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.369 [2024-12-05 19:49:34.448133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:07.369 [2024-12-05 19:49:34.448143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.795 ms 00:31:07.369 [2024-12-05 19:49:34.448152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.369 [2024-12-05 19:49:34.448486] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:31:07.369 [2024-12-05 19:49:34.448512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.369 [2024-12-05 19:49:34.448521] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:07.369 [2024-12-05 19:49:34.448529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.338 ms 00:31:07.369 [2024-12-05 19:49:34.448537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.369 [2024-12-05 19:49:34.448565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.369 [2024-12-05 19:49:34.448574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:07.369 [2024-12-05 19:49:34.448582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:07.369 [2024-12-05 19:49:34.448594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.369 [2024-12-05 19:49:34.448636] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 434.266 ms, result 0 00:31:07.369 [2024-12-05 19:49:34.448707] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:31:07.369 [2024-12-05 19:49:34.448809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.369 [2024-12-05 19:49:34.448820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:07.369 [2024-12-05 19:49:34.448829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.103 ms 00:31:07.369 [2024-12-05 19:49:34.448836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.627 [2024-12-05 19:49:34.862009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.627 [2024-12-05 19:49:34.862067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:07.627 [2024-12-05 19:49:34.862093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 412.233 ms 00:31:07.627 [2024-12-05 19:49:34.862102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.627 [2024-12-05 19:49:34.866400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.627 [2024-12-05 19:49:34.866438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:07.627 [2024-12-05 19:49:34.866449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.817 ms 00:31:07.627 [2024-12-05 19:49:34.866457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.627 [2024-12-05 19:49:34.866749] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:31:07.627 [2024-12-05 19:49:34.866791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.627 [2024-12-05 19:49:34.866799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:07.627 [2024-12-05 19:49:34.866808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.317 ms 00:31:07.627 [2024-12-05 19:49:34.866816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.627 [2024-12-05 19:49:34.866846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.627 [2024-12-05 19:49:34.866854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:07.627 [2024-12-05 19:49:34.866862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:07.627 [2024-12-05 19:49:34.866869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.627 [2024-12-05 
19:49:34.866903] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 418.217 ms, result 0 00:31:07.627 [2024-12-05 19:49:34.866942] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:07.627 [2024-12-05 19:49:34.866952] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:07.627 [2024-12-05 19:49:34.866961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.627 [2024-12-05 19:49:34.866969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:31:07.627 [2024-12-05 19:49:34.866977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 852.621 ms 00:31:07.627 [2024-12-05 19:49:34.866984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.627 [2024-12-05 19:49:34.867011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.627 [2024-12-05 19:49:34.867023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:31:07.627 [2024-12-05 19:49:34.867030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:07.627 [2024-12-05 19:49:34.867038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.627 [2024-12-05 19:49:34.878245] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:07.627 [2024-12-05 19:49:34.878468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.627 [2024-12-05 19:49:34.878491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:07.627 [2024-12-05 19:49:34.878502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.414 ms 00:31:07.627 [2024-12-05 19:49:34.878509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.627 [2024-12-05 19:49:34.879222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.627 [2024-12-05 19:49:34.879243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:31:07.627 [2024-12-05 19:49:34.879256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.639 ms 00:31:07.627 [2024-12-05 19:49:34.879264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.885 [2024-12-05 19:49:34.881516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.885 [2024-12-05 19:49:34.881635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:31:07.885 [2024-12-05 19:49:34.881650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.234 ms 00:31:07.885 [2024-12-05 19:49:34.881658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.885 [2024-12-05 19:49:34.881707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.885 [2024-12-05 19:49:34.881716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:31:07.885 [2024-12-05 19:49:34.881724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:07.885 [2024-12-05 19:49:34.881736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.885 [2024-12-05 19:49:34.881834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.885 [2024-12-05 19:49:34.881844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:07.886 
[2024-12-05 19:49:34.881852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:31:07.886 [2024-12-05 19:49:34.881859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.886 [2024-12-05 19:49:34.881877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.886 [2024-12-05 19:49:34.881885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:07.886 [2024-12-05 19:49:34.881892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:07.886 [2024-12-05 19:49:34.881899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.886 [2024-12-05 19:49:34.881928] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:07.886 [2024-12-05 19:49:34.881937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.886 [2024-12-05 19:49:34.881945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:07.886 [2024-12-05 19:49:34.881952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:07.886 [2024-12-05 19:49:34.881959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.886 [2024-12-05 19:49:34.882009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.886 [2024-12-05 19:49:34.882017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:07.886 [2024-12-05 19:49:34.882025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:31:07.886 [2024-12-05 19:49:34.882032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.886 [2024-12-05 19:49:34.882930] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1048.978 ms, result 0 00:31:07.886 [2024-12-05 19:49:34.895239] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:07.886 [2024-12-05 19:49:34.911235] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:07.886 [2024-12-05 19:49:34.919349] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:08.142 19:49:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:08.142 19:49:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:08.142 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:08.142 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:08.142 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:31:08.142 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:08.142 Validate MD5 checksum, iteration 1 00:31:08.142 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:08.142 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:08.142 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:08.142 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:08.142 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:08.142 19:49:35 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:08.142 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:08.142 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:08.142 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:08.399 [2024-12-05 19:49:35.414769] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 00:31:08.399 [2024-12-05 19:49:35.414980] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82791 ] 00:31:08.399 [2024-12-05 19:49:35.569113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.657 [2024-12-05 19:49:35.668193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:10.032  [2024-12-05T19:49:37.853Z] Copying: 685/1024 [MB] (685 MBps) [2024-12-05T19:49:46.010Z] Copying: 1024/1024 [MB] (average 672 MBps) 00:31:18.755 00:31:18.755 19:49:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:18.755 19:49:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:20.128 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:20.128 Validate MD5 checksum, iteration 2 00:31:20.128 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7b7c500f4a7c07751e89f45c4a373719 00:31:20.128 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7b7c500f4a7c07751e89f45c4a373719 != \7\b\7\c\5\0\0\f\4\a\7\c\0\7\7\5\1\e\8\9\f\4\5\c\4\a\3\7\3\7\1\9 ]] 00:31:20.128 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:20.128 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:20.128 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:20.128 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:20.128 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:20.128 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:20.128 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:20.128 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:20.128 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:20.128 [2024-12-05 19:49:47.281647] Starting SPDK v25.01-pre git sha1 
e2dfdf06c / DPDK 24.03.0 initialization... 00:31:20.128 [2024-12-05 19:49:47.281773] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82919 ] 00:31:20.386 [2024-12-05 19:49:47.443105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.386 [2024-12-05 19:49:47.538219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.286  [2024-12-05T19:49:49.798Z] Copying: 669/1024 [MB] (669 MBps) [2024-12-05T19:49:51.178Z] Copying: 1024/1024 [MB] (average 663 MBps) 00:31:23.923 00:31:23.923 19:49:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:23.923 19:49:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:25.830 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:25.830 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=66ebc005a81859352e18c5c481c03108 00:31:25.830 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 66ebc005a81859352e18c5c481c03108 != \6\6\e\b\c\0\0\5\a\8\1\8\5\9\3\5\2\e\1\8\c\5\c\4\8\1\c\0\3\1\0\8 ]] 00:31:25.830 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:25.830 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:25.830 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:25.830 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:25.830 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:25.830 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:25.830 19:49:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:25.830 19:49:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:25.830 19:49:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:25.830 19:49:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:25.830 19:49:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82758 ]] 00:31:25.830 19:49:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82758 00:31:25.830 19:49:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82758 ']' 00:31:25.830 19:49:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82758 00:31:25.830 19:49:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:25.830 19:49:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:25.830 19:49:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82758 00:31:25.830 killing process with pid 82758 00:31:25.830 19:49:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:25.830 19:49:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:25.830 19:49:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82758' 00:31:25.830 19:49:53 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 82758 00:31:25.830 19:49:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82758 00:31:26.395 [2024-12-05 19:49:53.564980] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:26.395 [2024-12-05 19:49:53.576952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.395 [2024-12-05 19:49:53.576984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:26.395 [2024-12-05 19:49:53.576994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:26.395 [2024-12-05 19:49:53.577001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.395 [2024-12-05 19:49:53.577019] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:26.395 [2024-12-05 19:49:53.579119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.395 [2024-12-05 19:49:53.579142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:26.395 [2024-12-05 19:49:53.579154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.089 ms 00:31:26.395 [2024-12-05 19:49:53.579161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.395 [2024-12-05 19:49:53.579354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.395 [2024-12-05 19:49:53.579363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:26.395 [2024-12-05 19:49:53.579369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.176 ms 00:31:26.395 [2024-12-05 19:49:53.579375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.395 [2024-12-05 19:49:53.580479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.395 [2024-12-05 19:49:53.580586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:26.395 [2024-12-05 19:49:53.580598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.091 ms 00:31:26.395 [2024-12-05 19:49:53.580608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.395 [2024-12-05 19:49:53.581496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.395 [2024-12-05 19:49:53.581511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:26.395 [2024-12-05 19:49:53.581519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.858 ms 00:31:26.395 [2024-12-05 19:49:53.581525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.395 [2024-12-05 19:49:53.588680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.395 [2024-12-05 19:49:53.588704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:26.395 [2024-12-05 19:49:53.588711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.127 ms 00:31:26.395 [2024-12-05 19:49:53.588721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.395 [2024-12-05 19:49:53.592778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.395 [2024-12-05 19:49:53.592800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:26.395 [2024-12-05 19:49:53.592808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.042 ms 00:31:26.395 [2024-12-05 19:49:53.592814] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:26.395 [2024-12-05 19:49:53.592883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.395 [2024-12-05 19:49:53.592891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:26.395 [2024-12-05 19:49:53.592898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:31:26.395 [2024-12-05 19:49:53.592907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.395 [2024-12-05 19:49:53.600204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.395 [2024-12-05 19:49:53.600228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:26.395 [2024-12-05 19:49:53.600234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.284 ms 00:31:26.395 [2024-12-05 19:49:53.600240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.395 [2024-12-05 19:49:53.607430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.395 [2024-12-05 19:49:53.607529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:26.395 [2024-12-05 19:49:53.607541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.176 ms 00:31:26.395 [2024-12-05 19:49:53.607546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.395 [2024-12-05 19:49:53.614336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.395 [2024-12-05 19:49:53.614427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:26.395 [2024-12-05 19:49:53.614437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.774 ms 00:31:26.395 [2024-12-05 19:49:53.614443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.395 [2024-12-05 19:49:53.621395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.395 [2024-12-05 19:49:53.621486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:26.395 [2024-12-05 19:49:53.621496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.916 ms 00:31:26.395 [2024-12-05 19:49:53.621501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.395 [2024-12-05 19:49:53.621516] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:26.395 [2024-12-05 19:49:53.621527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:26.395 [2024-12-05 19:49:53.621534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:26.395 [2024-12-05 19:49:53.621540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:26.395 [2024-12-05 19:49:53.621546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:26.395 [2024-12-05 19:49:53.621552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:26.395 [2024-12-05 19:49:53.621557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:26.395 [2024-12-05 19:49:53.621563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:26.395 [2024-12-05 19:49:53.621568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:26.395 
[2024-12-05 19:49:53.621574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:26.395 [2024-12-05 19:49:53.621580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:26.395 [2024-12-05 19:49:53.621586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:26.395 [2024-12-05 19:49:53.621591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:26.395 [2024-12-05 19:49:53.621597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:26.395 [2024-12-05 19:49:53.621602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:26.395 [2024-12-05 19:49:53.621608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:26.395 [2024-12-05 19:49:53.621614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:26.395 [2024-12-05 19:49:53.621619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:26.395 [2024-12-05 19:49:53.621625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:26.395 [2024-12-05 19:49:53.621632] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:26.395 [2024-12-05 19:49:53.621638] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 00439dda-d519-4ec7-88a6-b1dfade02cf3 00:31:26.395 [2024-12-05 19:49:53.621644] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:26.395 [2024-12-05 19:49:53.621649] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:26.395 [2024-12-05 19:49:53.621654] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:26.395 [2024-12-05 19:49:53.621660] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:26.395 [2024-12-05 19:49:53.621665] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:26.395 [2024-12-05 19:49:53.621688] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:26.395 [2024-12-05 19:49:53.621697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:26.395 [2024-12-05 19:49:53.621702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:26.395 [2024-12-05 19:49:53.621707] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:26.395 [2024-12-05 19:49:53.621713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.395 [2024-12-05 19:49:53.621720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:26.395 [2024-12-05 19:49:53.621727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.196 ms 00:31:26.395 [2024-12-05 19:49:53.621733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.395 [2024-12-05 19:49:53.631298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.395 [2024-12-05 19:49:53.631320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:26.395 [2024-12-05 19:49:53.631328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.551 ms 00:31:26.395 [2024-12-05 19:49:53.631334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:31:26.395 [2024-12-05 19:49:53.631602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.395 [2024-12-05 19:49:53.631609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:26.395 [2024-12-05 19:49:53.631615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.251 ms 00:31:26.395 [2024-12-05 19:49:53.631622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.652 [2024-12-05 19:49:53.664341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.652 [2024-12-05 19:49:53.664444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:26.652 [2024-12-05 19:49:53.664455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.652 [2024-12-05 19:49:53.664462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.652 [2024-12-05 19:49:53.664492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.652 [2024-12-05 19:49:53.664499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:26.652 [2024-12-05 19:49:53.664505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.652 [2024-12-05 19:49:53.664511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.652 [2024-12-05 19:49:53.664564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.652 [2024-12-05 19:49:53.664572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:26.652 [2024-12-05 19:49:53.664578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.652 [2024-12-05 19:49:53.664584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.652 [2024-12-05 19:49:53.664599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.652 [2024-12-05 19:49:53.664606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:26.652 [2024-12-05 19:49:53.664611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.652 [2024-12-05 19:49:53.664617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.652 [2024-12-05 19:49:53.723037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.652 [2024-12-05 19:49:53.723170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:26.652 [2024-12-05 19:49:53.723183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.652 [2024-12-05 19:49:53.723190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.652 [2024-12-05 19:49:53.772178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.652 [2024-12-05 19:49:53.772214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:26.652 [2024-12-05 19:49:53.772222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.652 [2024-12-05 19:49:53.772229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.652 [2024-12-05 19:49:53.772297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.652 [2024-12-05 19:49:53.772305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:26.652 [2024-12-05 19:49:53.772311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.652 [2024-12-05 19:49:53.772317] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.652 [2024-12-05 19:49:53.772349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.652 [2024-12-05 19:49:53.772364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:26.652 [2024-12-05 19:49:53.772371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.652 [2024-12-05 19:49:53.772376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.652 [2024-12-05 19:49:53.772443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.653 [2024-12-05 19:49:53.772450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:26.653 [2024-12-05 19:49:53.772457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.653 [2024-12-05 19:49:53.772462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.653 [2024-12-05 19:49:53.772485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.653 [2024-12-05 19:49:53.772492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:26.653 [2024-12-05 19:49:53.772500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.653 [2024-12-05 19:49:53.772506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.653 [2024-12-05 19:49:53.772532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.653 [2024-12-05 19:49:53.772538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:26.653 [2024-12-05 19:49:53.772544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.653 [2024-12-05 19:49:53.772550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.653 [2024-12-05 19:49:53.772582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.653 [2024-12-05 19:49:53.772592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:26.653 [2024-12-05 19:49:53.772598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.653 [2024-12-05 19:49:53.772603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.653 [2024-12-05 19:49:53.772726] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 195.733 ms, result 0 00:31:27.217 19:49:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:27.218 19:49:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:27.218 19:49:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:31:27.218 19:49:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:31:27.218 19:49:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:31:27.218 19:49:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:27.218 Remove shared memory files 00:31:27.218 19:49:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:31:27.218 19:49:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:27.218 19:49:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:27.218 19:49:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:27.218 19:49:54 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid82504 00:31:27.218 19:49:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:27.218 19:49:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:27.218 ************************************ 00:31:27.218 END TEST ftl_upgrade_shutdown 00:31:27.218 ************************************ 00:31:27.218 00:31:27.218 real 1m38.882s 00:31:27.218 user 2m12.713s 00:31:27.218 sys 0m19.193s 00:31:27.218 19:49:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:27.218 19:49:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:27.218 Process with pid 75403 is not found 00:31:27.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:27.218 19:49:54 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:31:27.218 19:49:54 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:31:27.218 19:49:54 ftl -- ftl/ftl.sh@14 -- # killprocess 75403 00:31:27.218 19:49:54 ftl -- common/autotest_common.sh@954 -- # '[' -z 75403 ']' 00:31:27.218 19:49:54 ftl -- common/autotest_common.sh@958 -- # kill -0 75403 00:31:27.218 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75403) - No such process 00:31:27.218 19:49:54 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 75403 is not found' 00:31:27.218 19:49:54 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:31:27.218 19:49:54 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=83028 00:31:27.218 19:49:54 ftl -- ftl/ftl.sh@20 -- # waitforlisten 83028 00:31:27.218 19:49:54 ftl -- common/autotest_common.sh@835 -- # '[' -z 83028 ']' 00:31:27.218 19:49:54 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:27.218 19:49:54 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:27.218 19:49:54 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:27.218 19:49:54 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:27.218 19:49:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:27.218 19:49:54 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:27.476 [2024-12-05 19:49:54.510050] Starting SPDK v25.01-pre git sha1 e2dfdf06c / DPDK 24.03.0 initialization... 
00:31:27.476 [2024-12-05 19:49:54.510294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83028 ] 00:31:27.476 [2024-12-05 19:49:54.668089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.734 [2024-12-05 19:49:54.763754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.301 19:49:55 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:28.301 19:49:55 ftl -- common/autotest_common.sh@868 -- # return 0 00:31:28.301 19:49:55 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:28.558 nvme0n1 00:31:28.558 19:49:55 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:31:28.558 19:49:55 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:28.558 19:49:55 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:28.558 19:49:55 ftl -- ftl/common.sh@28 -- # stores=c6dd6603-4430-494a-8aec-16330aa992fc 00:31:28.558 19:49:55 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:31:28.558 19:49:55 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c6dd6603-4430-494a-8aec-16330aa992fc 00:31:28.842 19:49:56 ftl -- ftl/ftl.sh@23 -- # killprocess 83028 00:31:28.842 19:49:56 ftl -- common/autotest_common.sh@954 -- # '[' -z 83028 ']' 00:31:28.842 19:49:56 ftl -- common/autotest_common.sh@958 -- # kill -0 83028 00:31:28.842 19:49:56 ftl -- common/autotest_common.sh@959 -- # uname 00:31:28.842 19:49:56 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:28.842 19:49:56 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83028 00:31:28.842 killing process with pid 83028 00:31:28.842 19:49:56 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:28.842 19:49:56 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:28.843 19:49:56 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83028' 00:31:28.843 19:49:56 ftl -- common/autotest_common.sh@973 -- # kill 83028 00:31:28.843 19:49:56 ftl -- common/autotest_common.sh@978 -- # wait 83028 00:31:30.216 19:49:57 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:30.475 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:30.475 Waiting for block devices as requested 00:31:30.475 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:30.733 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:30.733 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:31:30.733 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:31:36.057 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:31:36.057 19:50:02 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:31:36.057 19:50:02 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:36.057 Remove shared memory files 00:31:36.057 19:50:02 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:31:36.057 19:50:02 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:31:36.057 19:50:02 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:31:36.057 19:50:02 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:36.057 19:50:02 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:31:36.057 00:31:36.057 real 
11m11.559s 00:31:36.057 user 13m20.255s 00:31:36.057 sys 1m11.748s 00:31:36.057 19:50:02 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:36.057 19:50:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:36.057 ************************************ 00:31:36.057 END TEST ftl 00:31:36.057 ************************************ 00:31:36.057 19:50:03 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:36.057 19:50:03 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:31:36.057 19:50:03 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:31:36.057 19:50:03 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:31:36.057 19:50:03 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:31:36.057 19:50:03 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:31:36.057 19:50:03 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:31:36.057 19:50:03 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:31:36.057 19:50:03 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:31:36.057 19:50:03 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:31:36.057 19:50:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:36.057 19:50:03 -- common/autotest_common.sh@10 -- # set +x 00:31:36.057 19:50:03 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:31:36.057 19:50:03 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:31:36.057 19:50:03 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:31:36.057 19:50:03 -- common/autotest_common.sh@10 -- # set +x 00:31:36.991 INFO: APP EXITING 00:31:36.991 INFO: killing all VMs 00:31:36.991 INFO: killing vhost app 00:31:36.991 INFO: EXIT DONE 00:31:37.253 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:37.817 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:37.817 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:37.817 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:31:37.818 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:31:38.076 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:38.336 Cleaning 00:31:38.336 Removing: /var/run/dpdk/spdk0/config 00:31:38.336 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:38.336 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:38.336 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:38.336 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:38.336 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:38.336 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:38.336 Removing: /var/run/dpdk/spdk0 00:31:38.336 Removing: /var/run/dpdk/spdk_pid57109 00:31:38.336 Removing: /var/run/dpdk/spdk_pid57311 00:31:38.336 Removing: /var/run/dpdk/spdk_pid57523 00:31:38.336 Removing: /var/run/dpdk/spdk_pid57622 00:31:38.336 Removing: /var/run/dpdk/spdk_pid57667 00:31:38.336 Removing: /var/run/dpdk/spdk_pid57789 00:31:38.336 Removing: /var/run/dpdk/spdk_pid57807 00:31:38.336 Removing: /var/run/dpdk/spdk_pid58001 00:31:38.336 Removing: /var/run/dpdk/spdk_pid58094 00:31:38.336 Removing: /var/run/dpdk/spdk_pid58190 00:31:38.336 Removing: /var/run/dpdk/spdk_pid58301 00:31:38.336 Removing: /var/run/dpdk/spdk_pid58398 00:31:38.336 Removing: /var/run/dpdk/spdk_pid58436 00:31:38.336 Removing: /var/run/dpdk/spdk_pid58474 00:31:38.336 Removing: /var/run/dpdk/spdk_pid58544 00:31:38.336 Removing: /var/run/dpdk/spdk_pid58645 00:31:38.336 Removing: /var/run/dpdk/spdk_pid59087 00:31:38.336 Removing: /var/run/dpdk/spdk_pid59145 00:31:38.336 
Removing: /var/run/dpdk/spdk_pid59208 00:31:38.336 Removing: /var/run/dpdk/spdk_pid59224 00:31:38.336 Removing: /var/run/dpdk/spdk_pid59332 00:31:38.336 Removing: /var/run/dpdk/spdk_pid59348 00:31:38.336 Removing: /var/run/dpdk/spdk_pid59474 00:31:38.336 Removing: /var/run/dpdk/spdk_pid59495 00:31:38.336 Removing: /var/run/dpdk/spdk_pid59554 00:31:38.336 Removing: /var/run/dpdk/spdk_pid59572 00:31:38.336 Removing: /var/run/dpdk/spdk_pid59636 00:31:38.336 Removing: /var/run/dpdk/spdk_pid59654 00:31:38.336 Removing: /var/run/dpdk/spdk_pid59838 00:31:38.336 Removing: /var/run/dpdk/spdk_pid59869 00:31:38.336 Removing: /var/run/dpdk/spdk_pid59958 00:31:38.336 Removing: /var/run/dpdk/spdk_pid60141 00:31:38.336 Removing: /var/run/dpdk/spdk_pid60227 00:31:38.336 Removing: /var/run/dpdk/spdk_pid60269 00:31:38.336 Removing: /var/run/dpdk/spdk_pid60728 00:31:38.336 Removing: /var/run/dpdk/spdk_pid60821 00:31:38.336 Removing: /var/run/dpdk/spdk_pid60930 00:31:38.336 Removing: /var/run/dpdk/spdk_pid60983 00:31:38.336 Removing: /var/run/dpdk/spdk_pid61014 00:31:38.336 Removing: /var/run/dpdk/spdk_pid61098 00:31:38.336 Removing: /var/run/dpdk/spdk_pid61722 00:31:38.336 Removing: /var/run/dpdk/spdk_pid61759 00:31:38.593 Removing: /var/run/dpdk/spdk_pid62262 00:31:38.593 Removing: /var/run/dpdk/spdk_pid62355 00:31:38.593 Removing: /var/run/dpdk/spdk_pid62470 00:31:38.593 Removing: /var/run/dpdk/spdk_pid62518 00:31:38.593 Removing: /var/run/dpdk/spdk_pid62549 00:31:38.593 Removing: /var/run/dpdk/spdk_pid62580 00:31:38.593 Removing: /var/run/dpdk/spdk_pid64430 00:31:38.593 Removing: /var/run/dpdk/spdk_pid64562 00:31:38.593 Removing: /var/run/dpdk/spdk_pid64566 00:31:38.593 Removing: /var/run/dpdk/spdk_pid64583 00:31:38.593 Removing: /var/run/dpdk/spdk_pid64622 00:31:38.593 Removing: /var/run/dpdk/spdk_pid64626 00:31:38.593 Removing: /var/run/dpdk/spdk_pid64638 00:31:38.593 Removing: /var/run/dpdk/spdk_pid64683 00:31:38.593 Removing: /var/run/dpdk/spdk_pid64687 00:31:38.593 Removing: /var/run/dpdk/spdk_pid64699 00:31:38.593 Removing: /var/run/dpdk/spdk_pid64745 00:31:38.593 Removing: /var/run/dpdk/spdk_pid64749 00:31:38.593 Removing: /var/run/dpdk/spdk_pid64761 00:31:38.593 Removing: /var/run/dpdk/spdk_pid66142 00:31:38.593 Removing: /var/run/dpdk/spdk_pid66239 00:31:38.593 Removing: /var/run/dpdk/spdk_pid67644 00:31:38.593 Removing: /var/run/dpdk/spdk_pid69400 00:31:38.593 Removing: /var/run/dpdk/spdk_pid69474 00:31:38.593 Removing: /var/run/dpdk/spdk_pid69549 00:31:38.593 Removing: /var/run/dpdk/spdk_pid69653 00:31:38.593 Removing: /var/run/dpdk/spdk_pid69745 00:31:38.593 Removing: /var/run/dpdk/spdk_pid69839 00:31:38.593 Removing: /var/run/dpdk/spdk_pid69912 00:31:38.593 Removing: /var/run/dpdk/spdk_pid69983 00:31:38.593 Removing: /var/run/dpdk/spdk_pid70096 00:31:38.593 Removing: /var/run/dpdk/spdk_pid70183 00:31:38.593 Removing: /var/run/dpdk/spdk_pid70283 00:31:38.593 Removing: /var/run/dpdk/spdk_pid70346 00:31:38.593 Removing: /var/run/dpdk/spdk_pid70427 00:31:38.593 Removing: /var/run/dpdk/spdk_pid70531 00:31:38.593 Removing: /var/run/dpdk/spdk_pid70618 00:31:38.593 Removing: /var/run/dpdk/spdk_pid70714 00:31:38.593 Removing: /var/run/dpdk/spdk_pid70788 00:31:38.593 Removing: /var/run/dpdk/spdk_pid70864 00:31:38.593 Removing: /var/run/dpdk/spdk_pid70974 00:31:38.593 Removing: /var/run/dpdk/spdk_pid71060 00:31:38.593 Removing: /var/run/dpdk/spdk_pid71156 00:31:38.593 Removing: /var/run/dpdk/spdk_pid71230 00:31:38.593 Removing: /var/run/dpdk/spdk_pid71305 00:31:38.593 Removing: 
/var/run/dpdk/spdk_pid71384 00:31:38.593 Removing: /var/run/dpdk/spdk_pid71454 00:31:38.593 Removing: /var/run/dpdk/spdk_pid71557 00:31:38.593 Removing: /var/run/dpdk/spdk_pid71652 00:31:38.593 Removing: /var/run/dpdk/spdk_pid71748 00:31:38.593 Removing: /var/run/dpdk/spdk_pid71821 00:31:38.593 Removing: /var/run/dpdk/spdk_pid71891 00:31:38.593 Removing: /var/run/dpdk/spdk_pid71966 00:31:38.593 Removing: /var/run/dpdk/spdk_pid72046 00:31:38.593 Removing: /var/run/dpdk/spdk_pid72151 00:31:38.593 Removing: /var/run/dpdk/spdk_pid72242 00:31:38.593 Removing: /var/run/dpdk/spdk_pid72386 00:31:38.593 Removing: /var/run/dpdk/spdk_pid72674 00:31:38.593 Removing: /var/run/dpdk/spdk_pid72706 00:31:38.593 Removing: /var/run/dpdk/spdk_pid73151 00:31:38.593 Removing: /var/run/dpdk/spdk_pid73342 00:31:38.593 Removing: /var/run/dpdk/spdk_pid73436 00:31:38.593 Removing: /var/run/dpdk/spdk_pid73547 00:31:38.593 Removing: /var/run/dpdk/spdk_pid73599 00:31:38.593 Removing: /var/run/dpdk/spdk_pid73620 00:31:38.593 Removing: /var/run/dpdk/spdk_pid73928 00:31:38.593 Removing: /var/run/dpdk/spdk_pid73988 00:31:38.593 Removing: /var/run/dpdk/spdk_pid74057 00:31:38.593 Removing: /var/run/dpdk/spdk_pid74450 00:31:38.593 Removing: /var/run/dpdk/spdk_pid74598 00:31:38.593 Removing: /var/run/dpdk/spdk_pid75403 00:31:38.593 Removing: /var/run/dpdk/spdk_pid75530 00:31:38.593 Removing: /var/run/dpdk/spdk_pid75705 00:31:38.593 Removing: /var/run/dpdk/spdk_pid75802 00:31:38.593 Removing: /var/run/dpdk/spdk_pid76088 00:31:38.593 Removing: /var/run/dpdk/spdk_pid76336 00:31:38.593 Removing: /var/run/dpdk/spdk_pid76682 00:31:38.593 Removing: /var/run/dpdk/spdk_pid76881 00:31:38.593 Removing: /var/run/dpdk/spdk_pid77088 00:31:38.593 Removing: /var/run/dpdk/spdk_pid77145 00:31:38.593 Removing: /var/run/dpdk/spdk_pid77411 00:31:38.593 Removing: /var/run/dpdk/spdk_pid77450 00:31:38.593 Removing: /var/run/dpdk/spdk_pid77504 00:31:38.593 Removing: /var/run/dpdk/spdk_pid77875 00:31:38.593 Removing: /var/run/dpdk/spdk_pid78100 00:31:38.593 Removing: /var/run/dpdk/spdk_pid78363 00:31:38.593 Removing: /var/run/dpdk/spdk_pid78650 00:31:38.593 Removing: /var/run/dpdk/spdk_pid78937 00:31:38.593 Removing: /var/run/dpdk/spdk_pid79366 00:31:38.593 Removing: /var/run/dpdk/spdk_pid79519 00:31:38.594 Removing: /var/run/dpdk/spdk_pid79618 00:31:38.594 Removing: /var/run/dpdk/spdk_pid80003 00:31:38.594 Removing: /var/run/dpdk/spdk_pid80067 00:31:38.594 Removing: /var/run/dpdk/spdk_pid80382 00:31:38.594 Removing: /var/run/dpdk/spdk_pid80893 00:31:38.594 Removing: /var/run/dpdk/spdk_pid81924 00:31:38.594 Removing: /var/run/dpdk/spdk_pid82070 00:31:38.594 Removing: /var/run/dpdk/spdk_pid82116 00:31:38.594 Removing: /var/run/dpdk/spdk_pid82196 00:31:38.594 Removing: /var/run/dpdk/spdk_pid82253 00:31:38.594 Removing: /var/run/dpdk/spdk_pid82317 00:31:38.594 Removing: /var/run/dpdk/spdk_pid82504 00:31:38.594 Removing: /var/run/dpdk/spdk_pid82571 00:31:38.594 Removing: /var/run/dpdk/spdk_pid82632 00:31:38.594 Removing: /var/run/dpdk/spdk_pid82758 00:31:38.594 Removing: /var/run/dpdk/spdk_pid82791 00:31:38.594 Removing: /var/run/dpdk/spdk_pid82919 00:31:38.594 Removing: /var/run/dpdk/spdk_pid83028 00:31:38.594 Clean 00:31:38.851 19:50:05 -- common/autotest_common.sh@1453 -- # return 0 00:31:38.851 19:50:05 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:31:38.851 19:50:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:38.851 19:50:05 -- common/autotest_common.sh@10 -- # set +x 00:31:38.851 19:50:05 -- spdk/autotest.sh@391 -- # 
timing_exit autotest 00:31:38.851 19:50:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:38.851 19:50:05 -- common/autotest_common.sh@10 -- # set +x 00:31:38.851 19:50:05 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:38.851 19:50:05 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:38.851 19:50:05 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:38.851 19:50:05 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:31:38.851 19:50:05 -- spdk/autotest.sh@398 -- # hostname 00:31:38.851 19:50:05 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:38.851 geninfo: WARNING: invalid characters removed from testname! 00:32:05.416 19:50:29 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:05.416 19:50:32 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:07.945 19:50:34 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:09.314 19:50:36 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:11.840 19:50:38 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:13.738 19:50:40 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:16.328 19:50:43 -- spdk/autotest.sh@408 -- # 
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:16.328 19:50:43 -- spdk/autorun.sh@1 -- $ timing_finish 00:32:16.328 19:50:43 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:32:16.328 19:50:43 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:16.328 19:50:43 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:32:16.328 19:50:43 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:16.328 + [[ -n 5029 ]] 00:32:16.328 + sudo kill 5029 00:32:16.336 [Pipeline] } 00:32:16.352 [Pipeline] // timeout 00:32:16.358 [Pipeline] } 00:32:16.373 [Pipeline] // stage 00:32:16.378 [Pipeline] } 00:32:16.394 [Pipeline] // catchError 00:32:16.403 [Pipeline] stage 00:32:16.406 [Pipeline] { (Stop VM) 00:32:16.421 [Pipeline] sh 00:32:16.698 + vagrant halt 00:32:19.221 ==> default: Halting domain... 00:32:22.536 [Pipeline] sh 00:32:22.814 + vagrant destroy -f 00:32:25.344 ==> default: Removing domain... 00:32:25.920 [Pipeline] sh 00:32:26.214 + mv output /var/jenkins/workspace/nvme-vg-autotest_3/output 00:32:26.223 [Pipeline] } 00:32:26.239 [Pipeline] // stage 00:32:26.246 [Pipeline] } 00:32:26.261 [Pipeline] // dir 00:32:26.268 [Pipeline] } 00:32:26.282 [Pipeline] // wrap 00:32:26.290 [Pipeline] } 00:32:26.303 [Pipeline] // catchError 00:32:26.312 [Pipeline] stage 00:32:26.314 [Pipeline] { (Epilogue) 00:32:26.327 [Pipeline] sh 00:32:26.606 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:33.238 [Pipeline] catchError 00:32:33.241 [Pipeline] { 00:32:33.257 [Pipeline] sh 00:32:33.542 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:33.542 Artifacts sizes are good 00:32:33.553 [Pipeline] } 00:32:33.569 [Pipeline] // catchError 00:32:33.583 [Pipeline] archiveArtifacts 00:32:33.591 Archiving artifacts 00:32:33.727 [Pipeline] cleanWs 00:32:33.754 [WS-CLEANUP] Deleting project workspace... 00:32:33.754 [WS-CLEANUP] Deferred wipeout is used... 00:32:33.799 [WS-CLEANUP] done 00:32:33.801 [Pipeline] } 00:32:33.819 [Pipeline] // stage 00:32:33.825 [Pipeline] } 00:32:33.838 [Pipeline] // node 00:32:33.844 [Pipeline] End of Pipeline 00:32:33.895 Finished: SUCCESS
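
Note on the "Validate MD5 checksum, iteration N" passes traced above: each pass reads the next 1 GiB slice back out of the ftln1 bdev through spdk_dd (tcp_dd is the wrapper seen at ftl/common.sh@198-199, which points spdk_dd at the target's RPC socket) and compares the md5 of the readback against the sum recorded before the shutdown/upgrade cycle. A minimal sketch of that loop, reconstructed from the xtrace of test/ftl/upgrade_shutdown.sh shown above — variable names such as tmp_file and md5_expected are assumptions for illustration; only the individual commands appear verbatim in the trace:

    for (( i = 0; i < iterations; i++ )); do          # @97 in the trace
        echo "Validate MD5 checksum, iteration $(( i + 1 ))"   # @98
        # Read the next 1 GiB slice back from the ftln1 bdev via spdk_dd
        tcp_dd --ib=ftln1 --of="$tmp_file" --bs=1048576 --count=1024 --qd=2 --skip=$skip   # @99
        skip=$(( skip + 1024 ))                        # @100
        sum=$(md5sum "$tmp_file" | cut -f1 -d' ')      # @102-@103
        # Fail the test if the readback no longer matches the pre-shutdown sum (@105)
        [[ $sum == "${md5_expected[i]}" ]] || return 1
    done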
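
The teardown path relies on the killprocess helper from test/common/autotest_common.sh, whose xtrace is also visible above: probe the pid with kill -0, look up the process name with ps, then kill and wait. A condensed, runnable approximation of that traced pattern (the function body is a sketch inferred from the trace, not the verbatim helper; the @NNN comments refer to the script line numbers shown in the xtrace):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                            # @954: require a pid argument
        kill -0 "$pid" 2>/dev/null || return 0               # @958: nothing to do if already exited
        local process_name=
        if [ "$(uname)" = Linux ]; then                      # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: e.g. reactor_0 above
        fi
        if [ "$process_name" = sudo ]; then                  # @964
            : # the real helper special-cases sudo-wrapped targets; omitted in this sketch
        fi
        echo "killing process with pid $pid"                 # @972
        kill "$pid"                                          # @973
        wait "$pid" 2>/dev/null                              # @978: reap and propagate exit status
    }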