00:00:00.001 Started by upstream project "autotest-per-patch" build number 132709
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.127 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.128 The recommended git tool is: git
00:00:00.128 using credential 00000000-0000-0000-0000-000000000002
00:00:00.130 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.166 Fetching changes from the remote Git repository
00:00:00.168 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.237 Using shallow fetch with depth 1
00:00:00.237 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.237 > git --version # timeout=10
00:00:00.266 > git --version # 'git version 2.39.2'
00:00:00.266 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.285 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.285 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.346 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.357 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.369 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.369 > git config core.sparsecheckout # timeout=10
00:00:07.379 > git read-tree -mu HEAD # timeout=10
00:00:07.394 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.421 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.421 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.542 [Pipeline] Start of Pipeline
00:00:07.556 [Pipeline] library
00:00:07.558 Loading library shm_lib@master
00:00:07.558 Library shm_lib@master is cached. Copying from home.
00:00:07.575 [Pipeline] node
00:00:07.599 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:07.601 [Pipeline] {
00:00:07.611 [Pipeline] catchError
00:00:07.612 [Pipeline] {
00:00:07.622 [Pipeline] wrap
00:00:07.631 [Pipeline] {
00:00:07.639 [Pipeline] stage
00:00:07.641 [Pipeline] { (Prologue)
00:00:07.660 [Pipeline] echo
00:00:07.661 Node: VM-host-SM38
00:00:07.667 [Pipeline] cleanWs
00:00:07.678 [WS-CLEANUP] Deleting project workspace...
00:00:07.678 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.685 [WS-CLEANUP] done
00:00:07.900 [Pipeline] setCustomBuildProperty
00:00:07.967 [Pipeline] httpRequest
00:00:08.403 [Pipeline] echo
00:00:08.405 Sorcerer 10.211.164.20 is alive
00:00:08.414 [Pipeline] retry
00:00:08.416 [Pipeline] {
00:00:08.429 [Pipeline] httpRequest
00:00:08.433 HttpMethod: GET
00:00:08.433 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.433 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.435 Response Code: HTTP/1.1 200 OK
00:00:08.436 Success: Status code 200 is in the accepted range: 200,404
00:00:08.436 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.559 [Pipeline] }
00:00:09.573 [Pipeline] // retry
00:00:09.580 [Pipeline] sh
00:00:09.863 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.875 [Pipeline] httpRequest
00:00:10.724 [Pipeline] echo
00:00:10.725 Sorcerer 10.211.164.20 is alive
00:00:10.733 [Pipeline] retry
00:00:10.734 [Pipeline] {
00:00:10.745 [Pipeline] httpRequest
00:00:10.750 HttpMethod: GET
00:00:10.750 URL: http://10.211.164.20/packages/spdk_02b805e62d832895f152305e70a4a85679f27e67.tar.gz
00:00:10.750 Sending request to url: http://10.211.164.20/packages/spdk_02b805e62d832895f152305e70a4a85679f27e67.tar.gz
00:00:10.772 Response Code: HTTP/1.1 200 OK
00:00:10.772 Success: Status code 200 is in the accepted range: 200,404
00:00:10.772 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_02b805e62d832895f152305e70a4a85679f27e67.tar.gz
00:02:14.459 [Pipeline] }
00:02:14.477 [Pipeline] // retry
00:02:14.485 [Pipeline] sh
00:02:14.768 + tar --no-same-owner -xf spdk_02b805e62d832895f152305e70a4a85679f27e67.tar.gz
00:02:18.087 [Pipeline] sh
00:02:18.371 + git -C spdk log --oneline -n5
00:02:18.371 02b805e62 lib/reduce: Unmap backing dev blocks
00:02:18.371 a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:02:18.371 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair
00:02:18.371 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting
00:02:18.371 e2dfdf06c accel/mlx5: Register post_poller handler
00:02:18.393 [Pipeline] writeFile
00:02:18.409 [Pipeline] sh
00:02:18.696 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:18.709 [Pipeline] sh
00:02:18.995 + cat autorun-spdk.conf
00:02:18.995 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:18.995 SPDK_TEST_NVME=1
00:02:18.995 SPDK_TEST_FTL=1
00:02:18.995 SPDK_TEST_ISAL=1
00:02:18.995 SPDK_RUN_ASAN=1
00:02:18.995 SPDK_RUN_UBSAN=1
00:02:18.995 SPDK_TEST_XNVME=1
00:02:18.995 SPDK_TEST_NVME_FDP=1
00:02:18.995 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:19.004 RUN_NIGHTLY=0
00:02:19.006 [Pipeline] }
00:02:19.021 [Pipeline] // stage
00:02:19.037 [Pipeline] stage
00:02:19.040 [Pipeline] { (Run VM)
00:02:19.052 [Pipeline] sh
00:02:19.412 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:19.412 + echo 'Start stage prepare_nvme.sh'
00:02:19.412 Start stage prepare_nvme.sh
00:02:19.412 + [[ -n 4 ]]
00:02:19.412 + disk_prefix=ex4
00:02:19.412 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:02:19.412 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:02:19.412 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:02:19.412 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:19.412 ++ SPDK_TEST_NVME=1
00:02:19.412 ++ SPDK_TEST_FTL=1
00:02:19.412 ++ SPDK_TEST_ISAL=1
00:02:19.412 ++ SPDK_RUN_ASAN=1
00:02:19.412 ++ SPDK_RUN_UBSAN=1
00:02:19.412 ++ SPDK_TEST_XNVME=1
00:02:19.412 ++ SPDK_TEST_NVME_FDP=1
00:02:19.412 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:19.412 ++ RUN_NIGHTLY=0
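The autorun-spdk.conf printed and sourced above is a plain shell fragment: the autorun scripts source it and branch on the flags. A minimal sketch of that gating pattern (illustrative only; run_suite and the suite names are hypothetical stand-ins, not SPDK's actual helpers):

    #!/usr/bin/env bash
    # Sketch: consume autorun-spdk.conf-style flags to decide what to run.
    source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf

    run_suite() {            # hypothetical helper for illustration
        echo "=== running suite: $1 ==="
    }

    if [[ "${SPDK_RUN_FUNCTIONAL_TEST:-0}" -eq 1 ]]; then
        if [[ "${SPDK_TEST_NVME:-0}" -eq 1 ]]; then run_suite nvme; fi
        if [[ "${SPDK_TEST_FTL:-0}" -eq 1 ]]; then run_suite ftl; fi
        if [[ "${SPDK_TEST_NVME_FDP:-0}" -eq 1 ]]; then run_suite nvme_fdp; fi
    fi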
00:02:19.412 + cd /var/jenkins/workspace/nvme-vg-autotest
00:02:19.412 + nvme_files=()
00:02:19.412 + declare -A nvme_files
00:02:19.412 + backend_dir=/var/lib/libvirt/images/backends
00:02:19.412 + nvme_files['nvme.img']=5G
00:02:19.412 + nvme_files['nvme-cmb.img']=5G
00:02:19.412 + nvme_files['nvme-multi0.img']=4G
00:02:19.412 + nvme_files['nvme-multi1.img']=4G
00:02:19.412 + nvme_files['nvme-multi2.img']=4G
00:02:19.412 + nvme_files['nvme-openstack.img']=8G
00:02:19.412 + nvme_files['nvme-zns.img']=5G
00:02:19.412 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:19.412 + (( SPDK_TEST_FTL == 1 ))
00:02:19.412 + nvme_files["nvme-ftl.img"]=6G
00:02:19.413 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:19.413 + nvme_files["nvme-fdp.img"]=1G
00:02:19.413 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:19.413 + for nvme in "${!nvme_files[@]}"
00:02:19.413 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:02:19.413 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:19.413 + for nvme in "${!nvme_files[@]}"
00:02:19.413 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G
00:02:19.674 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:02:19.674 + for nvme in "${!nvme_files[@]}"
00:02:19.674 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:02:19.674 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:19.674 + for nvme in "${!nvme_files[@]}"
00:02:19.674 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:02:19.674 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:19.674 + for nvme in "${!nvme_files[@]}"
00:02:19.674 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:02:19.674 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:19.674 + for nvme in "${!nvme_files[@]}"
00:02:19.674 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:02:19.674 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:19.674 + for nvme in "${!nvme_files[@]}"
00:02:19.674 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:02:19.674 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:19.674 + for nvme in "${!nvme_files[@]}"
00:02:19.674 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G
00:02:19.935 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:02:19.935 + for nvme in "${!nvme_files[@]}"
00:02:19.935 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:02:19.935 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:19.935 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:02:19.935 + echo 'End stage prepare_nvme.sh'
00:02:19.935 End stage prepare_nvme.sh
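Condensed, the xtrace above corresponds to a script along these lines (a readable reconstruction of what prepare_nvme.sh evidently does, with the conditional FTL/FDP images; error handling and the remaining flags are omitted):

    declare -A nvme_files
    disk_prefix=ex4
    backend_dir=/var/lib/libvirt/images/backends

    nvme_files['nvme.img']=5G
    nvme_files['nvme-cmb.img']=5G
    nvme_files['nvme-multi0.img']=4G
    nvme_files['nvme-multi1.img']=4G
    nvme_files['nvme-multi2.img']=4G
    nvme_files['nvme-openstack.img']=8G
    nvme_files['nvme-zns.img']=5G
    if (( SPDK_TEST_FTL == 1 )); then nvme_files['nvme-ftl.img']=6G; fi
    if (( SPDK_TEST_NVME_FDP == 1 )); then nvme_files['nvme-fdp.img']=1G; fi

    # One raw, falloc-preallocated image per entry, e.g. ex4-nvme-ftl.img.
    for nvme in "${!nvme_files[@]}"; do
        sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
            -n "$backend_dir/$disk_prefix-$nvme" \
            -s "${nvme_files[$nvme]}"
    done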
00:02:19.949 [Pipeline] sh
00:02:20.237 + DISTRO=fedora39
00:02:20.237 + CPUS=10
00:02:20.237 + RAM=12288
00:02:20.237 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:20.237 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:02:20.237
00:02:20.237 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:02:20.237 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:02:20.237 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:02:20.237 HELP=0
00:02:20.237 DRY_RUN=0
00:02:20.237 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,
00:02:20.237 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:02:20.237 NVME_AUTO_CREATE=0
00:02:20.237 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,,
00:02:20.237 NVME_CMB=,,,,
00:02:20.237 NVME_PMR=,,,,
00:02:20.237 NVME_ZNS=,,,,
00:02:20.237 NVME_MS=true,,,,
00:02:20.237 NVME_FDP=,,,on,
00:02:20.237 SPDK_VAGRANT_DISTRO=fedora39
00:02:20.237 SPDK_VAGRANT_VMCPU=10
00:02:20.237 SPDK_VAGRANT_VMRAM=12288
00:02:20.237 SPDK_VAGRANT_PROVIDER=libvirt
00:02:20.237 SPDK_VAGRANT_HTTP_PROXY=
00:02:20.237 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:20.237 SPDK_OPENSTACK_NETWORK=0
00:02:20.237 VAGRANT_PACKAGE_BOX=0
00:02:20.237 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:20.237 FORCE_DISTRO=true
00:02:20.237 VAGRANT_BOX_VERSION=
00:02:20.237 EXTRA_VAGRANTFILES=
00:02:20.237 NIC_MODEL=e1000
00:02:20.237
00:02:20.237 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:02:20.237 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:02:22.764 Bringing machine 'default' up with 'libvirt' provider...
00:02:23.370 ==> default: Creating image (snapshot of base box volume).
00:02:23.630 ==> default: Creating domain with the following settings...
00:02:23.630 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733457190_1ebe2a9b87700979ca18
00:02:23.630 ==> default: -- Domain type: kvm
00:02:23.630 ==> default: -- Cpus: 10
00:02:23.630 ==> default: -- Feature: acpi
00:02:23.630 ==> default: -- Feature: apic
00:02:23.630 ==> default: -- Feature: pae
00:02:23.630 ==> default: -- Memory: 12288M
00:02:23.630 ==> default: -- Memory Backing: hugepages:
00:02:23.630 ==> default: -- Management MAC:
00:02:23.630 ==> default: -- Loader:
00:02:23.630 ==> default: -- Nvram:
00:02:23.630 ==> default: -- Base box: spdk/fedora39
00:02:23.630 ==> default: -- Storage pool: default
00:02:23.630 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733457190_1ebe2a9b87700979ca18.img (20G)
00:02:23.630 ==> default: -- Volume Cache: default
00:02:23.630 ==> default: -- Kernel:
00:02:23.630 ==> default: -- Initrd:
00:02:23.630 ==> default: -- Graphics Type: vnc
00:02:23.630 ==> default: -- Graphics Port: -1
00:02:23.630 ==> default: -- Graphics IP: 127.0.0.1
00:02:23.630 ==> default: -- Graphics Password: Not defined
00:02:23.630 ==> default: -- Video Type: cirrus
00:02:23.630 ==> default: -- Video VRAM: 9216
00:02:23.630 ==> default: -- Sound Type:
00:02:23.630 ==> default: -- Keymap: en-us
00:02:23.630 ==> default: -- TPM Path:
00:02:23.630 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:23.630 ==> default: -- Command line args:
00:02:23.630 ==> default: -> value=-device,
00:02:23.631 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:23.631 ==> default: -> value=-drive,
00:02:23.631 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:02:23.631 ==> default: -> value=-device,
00:02:23.631 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:02:23.631 ==> default: -> value=-device,
00:02:23.631 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:23.631 ==> default: -> value=-drive,
00:02:23.631 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0,
00:02:23.631 ==> default: -> value=-device,
00:02:23.631 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:23.631 ==> default: -> value=-device,
00:02:23.631 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:02:23.631 ==> default: -> value=-drive,
00:02:23.631 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:02:23.631 ==> default: -> value=-device,
00:02:23.631 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:23.631 ==> default: -> value=-drive,
00:02:23.631 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:02:23.631 ==> default: -> value=-device,
00:02:23.631 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:23.631 ==> default: -> value=-drive,
00:02:23.631 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:02:23.631 ==> default: -> value=-device,
00:02:23.631 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:23.631 ==> default: -> value=-device,
00:02:23.631 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:02:23.631 ==> default: -> value=-device,
00:02:23.631 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:02:23.631 ==> default: -> value=-drive,
00:02:23.631 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:02:23.631 ==> default: -> value=-device,
00:02:23.631 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
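Each "-> value=" pair above is one element of the QEMU argv that the vagrant-libvirt setup assembles. Re-joined, the FDP controller (nvme-3) alone corresponds to roughly the fragment below; this is an illustrative re-assembly of the logged values, not a command run verbatim by the job, and the other three controllers are attached the same way:

    qemu-system-x86_64 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096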
00:02:23.631 ==> default: Creating shared folders metadata...
00:02:23.631 ==> default: Starting domain.
00:02:25.532 ==> default: Waiting for domain to get an IP address...
00:02:43.636 ==> default: Waiting for SSH to become available...
00:02:43.636 ==> default: Configuring and enabling network interfaces...
00:02:47.844 default: SSH address: 192.168.121.85:22
00:02:47.844 default: SSH username: vagrant
00:02:47.844 default: SSH auth method: private key
00:02:49.749 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:57.939 ==> default: Mounting SSHFS shared folder...
00:02:59.310 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:59.310 ==> default: Checking Mount..
00:03:00.241 ==> default: Folder Successfully Mounted!
00:03:00.241
00:03:00.241 SUCCESS!
00:03:00.241
00:03:00.241 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:03:00.241 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:03:00.241 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:03:00.241
00:03:00.249 [Pipeline] }
00:03:00.265 [Pipeline] // stage
00:03:00.275 [Pipeline] dir
00:03:00.276 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:03:00.277 [Pipeline] {
00:03:00.292 [Pipeline] catchError
00:03:00.294 [Pipeline] {
00:03:00.309 [Pipeline] sh
00:03:00.585 + vagrant ssh-config --host vagrant
00:03:00.585 + sed -ne '/^Host/,$p'
00:03:00.585 + tee ssh_conf
00:03:03.107 Host vagrant
00:03:03.107 HostName 192.168.121.85
00:03:03.107 User vagrant
00:03:03.107 Port 22
00:03:03.107 UserKnownHostsFile /dev/null
00:03:03.107 StrictHostKeyChecking no
00:03:03.107 PasswordAuthentication no
00:03:03.107 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:03:03.107 IdentitiesOnly yes
00:03:03.107 LogLevel FATAL
00:03:03.107 ForwardAgent yes
00:03:03.107 ForwardX11 yes
00:03:03.107
00:03:03.119 [Pipeline] withEnv
00:03:03.122 [Pipeline] {
00:03:03.136 [Pipeline] sh
00:03:03.414 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:03:03.414 source /etc/os-release
00:03:03.414 [[ -e /image.version ]] && img=$(< /image.version)
00:03:03.414 # Minimal, systemd-like check.
00:03:03.414 if [[ -e /.dockerenv ]]; then
00:03:03.414 # Clear garbage from the node'\''s name:
00:03:03.414 # agt-er_autotest_547-896 -> autotest_547-896
00:03:03.414 # $HOSTNAME is the actual container id
00:03:03.414 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:03:03.414 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:03:03.414 # We can assume this is a mount from a host where container is running,
00:03:03.414 # so fetch its hostname to easily identify the target swarm worker.
00:03:03.414 container="$(< /etc/hostname) ($agent)"
00:03:03.414 else
00:03:03.414 # Fallback
00:03:03.414 container=$agent
00:03:03.414 fi
00:03:03.414 fi
00:03:03.414 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:03:03.414 '
00:03:03.681 [Pipeline] }
00:03:03.699 [Pipeline] // withEnv
00:03:03.707 [Pipeline] setCustomBuildProperty
00:03:03.723 [Pipeline] stage
00:03:03.725 [Pipeline] { (Tests)
00:03:03.743 [Pipeline] sh
00:03:04.025 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:03:04.297 [Pipeline] sh
00:03:04.576 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:03:04.849 [Pipeline] timeout
00:03:04.850 Timeout set to expire in 50 min
00:03:04.853 [Pipeline] {
00:03:04.872 [Pipeline] sh
00:03:05.151 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:03:05.718 HEAD is now at 02b805e62 lib/reduce: Unmap backing dev blocks
00:03:05.730 [Pipeline] sh
00:03:06.008 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:03:06.280 [Pipeline] sh
00:03:06.558 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:03:06.834 [Pipeline] sh
00:03:07.117 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:03:07.375 ++ readlink -f spdk_repo
00:03:07.375 + DIR_ROOT=/home/vagrant/spdk_repo
00:03:07.375 + [[ -n /home/vagrant/spdk_repo ]]
00:03:07.375 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:03:07.375 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:03:07.375 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:03:07.375 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:03:07.375 + [[ -d /home/vagrant/spdk_repo/output ]]
00:03:07.375 + [[ nvme-vg-autotest == pkgdep-* ]]
00:03:07.375 + cd /home/vagrant/spdk_repo
00:03:07.375 + source /etc/os-release
00:03:07.375 ++ NAME='Fedora Linux'
00:03:07.375 ++ VERSION='39 (Cloud Edition)'
00:03:07.375 ++ ID=fedora
00:03:07.375 ++ VERSION_ID=39
00:03:07.375 ++ VERSION_CODENAME=
00:03:07.375 ++ PLATFORM_ID=platform:f39
00:03:07.375 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:07.375 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:07.375 ++ LOGO=fedora-logo-icon
00:03:07.375 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:07.375 ++ HOME_URL=https://fedoraproject.org/
00:03:07.375 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:07.375 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:07.375 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:07.375 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:07.375 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:07.375 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:07.375 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:07.375 ++ SUPPORT_END=2024-11-12
00:03:07.375 ++ VARIANT='Cloud Edition'
00:03:07.375 ++ VARIANT_ID=cloud
00:03:07.375 + uname -a
00:03:07.375 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:07.375 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:07.633 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:07.890 Hugepages
00:03:07.890 node hugesize free / total
00:03:07.890 node0 1048576kB 0 / 0
00:03:07.890 node0 2048kB 0 / 0
00:03:07.890
00:03:07.890 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:07.890 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:03:07.890 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:03:07.890 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:03:07.890 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:03:07.890 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
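The per-node hugepage counters in the status output come straight from sysfs, so they can be inspected (or pages reserved) without setup.sh; a small sketch using the standard kernel interface (generic Linux, not SPDK-specific):

    # Print free/total hugepages per node, as in the table above.
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            printf '%s %s free=%s total=%s\n' \
                "${node##*/}" "${hp##*/}" \
                "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
        done
    done
    # Reserving 1024 x 2 MiB pages on node0 would be (as root):
    #   echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages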
00:03:07.890 + rm -f /tmp/spdk-ld-path
00:03:07.890 + source autorun-spdk.conf
00:03:07.890 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:07.890 ++ SPDK_TEST_NVME=1
00:03:07.890 ++ SPDK_TEST_FTL=1
00:03:07.890 ++ SPDK_TEST_ISAL=1
00:03:07.890 ++ SPDK_RUN_ASAN=1
00:03:07.890 ++ SPDK_RUN_UBSAN=1
00:03:07.890 ++ SPDK_TEST_XNVME=1
00:03:07.890 ++ SPDK_TEST_NVME_FDP=1
00:03:07.890 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:07.890 ++ RUN_NIGHTLY=0
00:03:07.890 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:07.890 + [[ -n '' ]]
00:03:07.890 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:03:07.890 + for M in /var/spdk/build-*-manifest.txt
00:03:07.890 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:07.890 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:03:08.147 + for M in /var/spdk/build-*-manifest.txt
00:03:08.147 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:08.147 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:03:08.147 + for M in /var/spdk/build-*-manifest.txt
00:03:08.147 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:08.147 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:03:08.147 ++ uname
00:03:08.147 + [[ Linux == \L\i\n\u\x ]]
00:03:08.147 + sudo dmesg -T
00:03:08.147 + sudo dmesg --clear
00:03:08.147 + dmesg_pid=5028
00:03:08.147 + [[ Fedora Linux == FreeBSD ]]
00:03:08.147 + sudo dmesg -Tw
00:03:08.147 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:08.147 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:08.147 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:08.147 + [[ -x /usr/src/fio-static/fio ]]
00:03:08.147 + export FIO_BIN=/usr/src/fio-static/fio
00:03:08.147 + FIO_BIN=/usr/src/fio-static/fio
00:03:08.147 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:08.147 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:08.147 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:08.147 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:08.147 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:08.147 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:08.147 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:08.147 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:08.147 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:08.147 03:53:55 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:08.147 03:53:55 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:08.147 03:53:55 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:08.147 03:53:55 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:03:08.147 03:53:55 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:03:08.147 03:53:55 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:03:08.147 03:53:55 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:03:08.147 03:53:55 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:03:08.147 03:53:55 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:03:08.147 03:53:55 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:03:08.147 03:53:55 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:08.147 03:53:55 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:03:08.147 03:53:55 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:08.147 03:53:55 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:08.147 03:53:55 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:08.147 03:53:55 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:03:08.147 03:53:55 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:08.147 03:53:55 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:08.147 03:53:55 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:08.147 03:53:55 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:08.147 03:53:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:08.404 03:53:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:08.405 03:53:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:08.405 03:53:55 -- paths/export.sh@5 -- $ export PATH
00:03:08.405 03:53:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:08.405 03:53:55 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:03:08.405 03:53:55 -- common/autobuild_common.sh@493 -- $ date +%s
00:03:08.405 03:53:55 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733457235.XXXXXX
00:03:08.405 03:53:55 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733457235.IQ9RGu
00:03:08.405 03:53:55 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:03:08.405 03:53:55 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:03:08.405 03:53:55 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:03:08.405 03:53:55 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:03:08.405 03:53:55 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:03:08.405 03:53:55 -- common/autobuild_common.sh@509 -- $ get_config_params
00:03:08.405 03:53:55 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:08.405 03:53:55 -- common/autotest_common.sh@10 -- $ set +x
00:03:08.405 03:53:55 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:03:08.405 03:53:55 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:03:08.405 03:53:55 -- pm/common@17 -- $ local monitor
00:03:08.405 03:53:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:08.405 03:53:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:08.405 03:53:55 -- pm/common@25 -- $ sleep 1
00:03:08.405 03:53:55 -- pm/common@21 -- $ date +%s
00:03:08.405 03:53:55 -- pm/common@21 -- $ date +%s
00:03:08.405 03:53:55 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733457235
00:03:08.405 03:53:55 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733457235
00:03:08.405 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733457235_collect-cpu-load.pm.log
00:03:08.405 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733457235_collect-vmstat.pm.log
00:03:09.340 03:53:56 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:03:09.340 03:53:56 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:09.340 03:53:56 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:09.340 03:53:56 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:09.340 03:53:56 -- spdk/autobuild.sh@16 -- $ date -u
00:03:09.340 Fri Dec 6 03:53:56 AM UTC 2024
00:03:09.340 03:53:56 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:09.340 v25.01-pre-304-g02b805e62
00:03:09.340 03:53:56 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:03:09.340 03:53:56 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:03:09.340 03:53:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:09.340 03:53:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:09.340 03:53:56 -- common/autotest_common.sh@10 -- $ set +x
00:03:09.340 ************************************
00:03:09.340 START TEST asan
00:03:09.340 ************************************
00:03:09.340 using asan
00:03:09.341 03:53:56 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:03:09.341
00:03:09.341 real 0m0.000s
00:03:09.341 user 0m0.000s
00:03:09.341 sys 0m0.000s
00:03:09.341 03:53:56 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:09.341 03:53:56 asan -- common/autotest_common.sh@10 -- $ set +x
00:03:09.341 ************************************
00:03:09.341 END TEST asan
00:03:09.341 ************************************
00:03:09.341 03:53:56 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:09.341 03:53:56 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:09.341 03:53:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:09.341 03:53:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:09.341 03:53:56 -- common/autotest_common.sh@10 -- $ set +x
00:03:09.341 ************************************
00:03:09.341 START TEST ubsan
00:03:09.341 ************************************
00:03:09.341 using ubsan
00:03:09.341 03:53:56 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:09.341
00:03:09.341 real 0m0.000s
00:03:09.341 user 0m0.000s
00:03:09.341 sys 0m0.000s
00:03:09.341 03:53:56 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:09.341 03:53:56 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:09.341 ************************************
00:03:09.341 END TEST ubsan
00:03:09.341 ************************************
00:03:09.341 03:53:56 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:09.341 03:53:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:09.341 03:53:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:09.341 03:53:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:09.341 03:53:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:09.341 03:53:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:09.341 03:53:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:09.341 03:53:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:09.341 03:53:56 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:03:09.598 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:09.598 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:09.855 Using 'verbs' RDMA provider
00:03:20.778 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:32.976 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:32.976 Creating mk/config.mk...done.
00:03:32.976 Creating mk/cc.flags.mk...done.
00:03:32.976 Type 'make' to build.
00:03:32.976 03:54:18 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:32.976 03:54:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:32.976 03:54:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:32.976 03:54:18 -- common/autotest_common.sh@10 -- $ set +x
00:03:32.976 ************************************
00:03:32.976 START TEST make
00:03:32.976 ************************************
00:03:32.976 03:54:18 make -- common/autotest_common.sh@1129 -- $ make -j10
00:03:32.976 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:03:32.976 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:03:32.976 meson setup builddir \
00:03:32.976 -Dwith-libaio=enabled \
00:03:32.976 -Dwith-liburing=enabled \
00:03:32.976 -Dwith-libvfn=disabled \
00:03:32.976 -Dwith-spdk=disabled \
00:03:32.976 -Dexamples=false \
00:03:32.976 -Dtests=false \
00:03:32.976 -Dtools=false && \
00:03:32.976 meson compile -C builddir && \
00:03:32.976 cd -)
00:03:32.976 make[1]: Nothing to be done for 'all'.
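run_test, seen here and in the asan/ubsan blocks above, wraps a command in START/END banners. In spirit it behaves like the sketch below (a simplified illustration, not SPDK's actual implementation in autotest_common.sh, which also records timing data):

    run_test_sketch() {      # hypothetical stand-in for run_test
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }
    run_test_sketch make make -j10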
00:03:33.546 The Meson build system
00:03:33.546 Version: 1.5.0
00:03:33.546 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:03:33.546 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:33.546 Build type: native build
00:03:33.546 Project name: xnvme
00:03:33.546 Project version: 0.7.5
00:03:33.546 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:33.546 C linker for the host machine: cc ld.bfd 2.40-14
00:03:33.546 Host machine cpu family: x86_64
00:03:33.546 Host machine cpu: x86_64
00:03:33.546 Message: host_machine.system: linux
00:03:33.546 Compiler for C supports arguments -Wno-missing-braces: YES
00:03:33.546 Compiler for C supports arguments -Wno-cast-function-type: YES
00:03:33.546 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:33.546 Run-time dependency threads found: YES
00:03:33.546 Has header "setupapi.h" : NO
00:03:33.546 Has header "linux/blkzoned.h" : YES
00:03:33.546 Has header "linux/blkzoned.h" : YES (cached)
00:03:33.546 Has header "libaio.h" : YES
00:03:33.546 Library aio found: YES
00:03:33.546 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:33.546 Run-time dependency liburing found: YES 2.2
00:03:33.546 Dependency libvfn skipped: feature with-libvfn disabled
00:03:33.546 Found CMake: /usr/bin/cmake (3.27.7)
00:03:33.546 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:03:33.546 Subproject spdk : skipped: feature with-spdk disabled
00:03:33.546 Run-time dependency appleframeworks found: NO (tried framework)
00:03:33.546 Run-time dependency appleframeworks found: NO (tried framework)
00:03:33.546 Library rt found: YES
00:03:33.546 Checking for function "clock_gettime" with dependency -lrt: YES
00:03:33.546 Configuring xnvme_config.h using configuration
00:03:33.546 Configuring xnvme.spec using configuration
00:03:33.546 Run-time dependency bash-completion found: YES 2.11
00:03:33.546 Message: Bash-completions: /usr/share/bash-completion/completions
00:03:33.546 Program cp found: YES (/usr/bin/cp)
00:03:33.546 Build targets in project: 3
00:03:33.546
00:03:33.546 xnvme 0.7.5
00:03:33.546
00:03:33.546 Subprojects
00:03:33.546 spdk : NO Feature 'with-spdk' disabled
00:03:33.546
00:03:33.546 User defined options
00:03:33.546 examples : false
00:03:33.546 tests : false
00:03:33.546 tools : false
00:03:33.546 with-libaio : enabled
00:03:33.546 with-liburing: enabled
00:03:33.546 with-libvfn : disabled
00:03:33.546 with-spdk : disabled
00:03:33.546
00:03:33.546 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:34.111 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:03:34.111 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:03:34.111 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:03:34.111 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:03:34.111 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:03:34.111 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:03:34.111 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:03:34.111 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:03:34.111 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:03:34.111 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:03:34.111 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:03:34.111 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:03:34.111 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:03:34.111 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:03:34.111 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:03:34.369 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:03:34.369 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:03:34.369 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:03:34.369 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:03:34.369 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:03:34.369 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:03:34.369 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:03:34.369 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:03:34.369 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:03:34.370 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:03:34.370 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:03:34.370 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:03:34.370 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:03:34.370 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:03:34.370 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:03:34.370 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:03:34.370 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:03:34.370 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:03:34.370 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:03:34.370 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:03:34.370 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:03:34.370 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:03:34.370 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:03:34.370 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:03:34.370 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:03:34.370 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:03:34.370 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:03:34.370 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:03:34.370 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:03:34.370 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:03:34.370 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:03:34.370 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:03:34.370 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:03:34.370 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:03:34.370 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:03:34.370 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:03:34.370 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:03:34.370 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:03:34.370 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:03:34.628 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:03:34.628 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:03:34.628 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:03:34.628 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:03:34.628 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:03:34.628 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:03:34.628 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:03:34.628 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:03:34.628 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:03:34.628 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:03:34.628 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:03:34.628 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:03:34.628 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:03:34.628 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:03:34.628 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:03:34.628 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:03:34.628 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:03:34.886 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:03:34.886 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:03:34.886 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:03:35.145 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:03:35.145 [75/76] Linking static target lib/libxnvme.a
00:03:35.145 [76/76] Linking target lib/libxnvme.so.0.7.5
00:03:35.145 INFO: autodetecting backend as ninja
00:03:35.145 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:35.145 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:03:41.840 The Meson build system
00:03:41.840 Version: 1.5.0
00:03:41.840 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:41.840 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:41.840 Build type: native build
00:03:41.840 Program cat found: YES (/usr/bin/cat)
00:03:41.840 Project name: DPDK
00:03:41.841 Project version: 24.03.0
00:03:41.841 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:41.841 C linker for the host machine: cc ld.bfd 2.40-14
00:03:41.841 Host machine cpu family: x86_64
00:03:41.841 Host machine cpu: x86_64
00:03:41.841 Message: ## Building in Developer Mode ##
00:03:41.841 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:41.841 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:41.841 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:41.841 Program python3 found: YES (/usr/bin/python3)
00:03:41.841 Program cat found: YES (/usr/bin/cat)
00:03:41.841 Compiler for C supports arguments -march=native: YES
00:03:41.841 Checking for size of "void *" : 8
00:03:41.841 Checking for size of "void *" : 8 (cached)
00:03:41.841 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:41.841 Library m found: YES
00:03:41.841 Library numa found: YES
00:03:41.841 Has header "numaif.h" : YES
00:03:41.841 Library fdt found: NO
00:03:41.841 Library execinfo found: NO
00:03:41.841 Has header "execinfo.h" : YES
00:03:41.841 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:41.841 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:41.841 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:41.841 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:41.841 Run-time dependency openssl found: YES 3.1.1
00:03:41.841 Run-time dependency libpcap found: YES 1.10.4
00:03:41.841 Has header "pcap.h" with dependency libpcap: YES
00:03:41.841 Compiler for C supports arguments -Wcast-qual: YES
00:03:41.841 Compiler for C supports arguments -Wdeprecated: YES
00:03:41.841 Compiler for C supports arguments -Wformat: YES
00:03:41.841 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:41.841 Compiler for C supports arguments -Wformat-security: NO
00:03:41.841 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:41.841 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:41.841 Compiler for C supports arguments -Wnested-externs: YES
00:03:41.841 Compiler for C supports arguments -Wold-style-definition: YES
00:03:41.841 Compiler for C supports arguments -Wpointer-arith: YES
00:03:41.841 Compiler for C supports arguments -Wsign-compare: YES
00:03:41.841 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:41.841 Compiler for C supports arguments -Wundef: YES
00:03:41.841 Compiler for C supports arguments -Wwrite-strings: YES
00:03:41.841 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:41.841 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:41.841 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:41.841 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:41.841 Program objdump found: YES (/usr/bin/objdump)
00:03:41.841 Compiler for C supports arguments -mavx512f: YES
00:03:41.841 Checking if "AVX512 checking" compiles: YES
00:03:41.841 Fetching value of define "__SSE4_2__" : 1
00:03:41.841 Fetching value of define "__AES__" : 1
00:03:41.841 Fetching value of define "__AVX__" : 1
00:03:41.841 Fetching value of define "__AVX2__" : 1
00:03:41.841 Fetching value of define "__AVX512BW__" : 1
00:03:41.841 Fetching value of define "__AVX512CD__" : 1
00:03:41.841 Fetching value of define "__AVX512DQ__" : 1
00:03:41.841 Fetching value of define "__AVX512F__" : 1
00:03:41.841 Fetching value of define "__AVX512VL__" : 1
00:03:41.841 Fetching value of define "__PCLMUL__" : 1
00:03:41.841 Fetching value of define "__RDRND__" : 1
00:03:41.841 Fetching value of define "__RDSEED__" : 1
00:03:41.841 Fetching value of define "__VPCLMULQDQ__" : 1
00:03:41.841 Fetching value of define "__znver1__" : (undefined)
00:03:41.841 Fetching value of define "__znver2__" : (undefined)
00:03:41.841 Fetching value of define "__znver3__" : (undefined)
00:03:41.841 Fetching value of define "__znver4__" : (undefined)
00:03:41.841 Library asan found: YES
00:03:41.841 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:41.841 Message: lib/log: Defining dependency "log"
00:03:41.841 Message: lib/kvargs: Defining dependency "kvargs"
00:03:41.841 Message: lib/telemetry: Defining dependency "telemetry"
00:03:41.841 Library rt found: YES
00:03:41.841 Checking for function "getentropy" : NO
00:03:41.841 Message: lib/eal: Defining dependency "eal"
00:03:41.841 Message: lib/ring: Defining dependency "ring"
00:03:41.841 Message: lib/rcu: Defining dependency "rcu"
00:03:41.841 Message: lib/mempool: Defining dependency "mempool"
00:03:41.841 Message: lib/mbuf: Defining dependency "mbuf"
00:03:41.841 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:41.841 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:41.841 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:41.841 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:41.841 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:41.841 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:03:41.841 Compiler for C supports arguments -mpclmul: YES
00:03:41.841 Compiler for C supports arguments -maes: YES
00:03:41.841 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:41.841 Compiler for C supports arguments -mavx512bw: YES
00:03:41.841 Compiler for C supports arguments -mavx512dq: YES
00:03:41.841 Compiler for C supports arguments -mavx512vl: YES
00:03:41.841 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:41.841 Compiler for C supports arguments -mavx2: YES
00:03:41.841 Compiler for C supports arguments -mavx: YES
00:03:41.841 Message: lib/net: Defining dependency "net"
00:03:41.841 Message: lib/meter: Defining dependency "meter"
00:03:41.841 Message: lib/ethdev: Defining dependency "ethdev"
00:03:41.841 Message: lib/pci: Defining dependency "pci"
00:03:41.841 Message: lib/cmdline: Defining dependency "cmdline"
00:03:41.841 Message: lib/hash: Defining dependency "hash"
00:03:41.841 Message: lib/timer: Defining dependency "timer"
00:03:41.841 Message: lib/compressdev: Defining dependency "compressdev"
00:03:41.841 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:41.841 Message: lib/dmadev: Defining dependency "dmadev"
00:03:41.841 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:41.841 Message: lib/power: Defining dependency "power"
00:03:41.841 Message: lib/reorder: Defining dependency "reorder"
00:03:41.841 Message: lib/security: Defining dependency "security"
00:03:41.841 Has header "linux/userfaultfd.h" : YES
00:03:41.841 Has header "linux/vduse.h" : YES
00:03:41.841 Message: lib/vhost: Defining dependency "vhost"
00:03:41.841 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:41.841 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:41.841 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:41.841 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:41.841 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:41.841 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:41.841 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:41.841 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:41.841 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:41.841 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:41.841 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:41.841 Configuring doxy-api-html.conf using configuration
00:03:41.841 Configuring doxy-api-man.conf using configuration
00:03:41.841 Program mandb found: YES (/usr/bin/mandb)
00:03:41.841 Program sphinx-build found: NO
00:03:41.841 Configuring rte_build_config.h using configuration
00:03:41.841 Message:
00:03:41.841 =================
00:03:41.841 Applications Enabled
00:03:41.841 =================
00:03:41.841
00:03:41.841 apps:
00:03:41.841
00:03:41.841
00:03:41.841 Message:
00:03:41.841 =================
00:03:41.841 Libraries Enabled
00:03:41.841 =================
00:03:41.841
00:03:41.841 libs:
00:03:41.841 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:41.841 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:41.841 cryptodev, dmadev, power, reorder, security, vhost,
00:03:41.841
00:03:41.841 Message:
00:03:41.841 ===============
00:03:41.841 Drivers Enabled
00:03:41.841 ===============
00:03:41.841
00:03:41.841 common:
00:03:41.841
00:03:41.841 bus:
00:03:41.841 pci, vdev,
00:03:41.841 mempool:
00:03:41.841 ring,
00:03:41.841 dma:
00:03:41.841
00:03:41.841 net:
00:03:41.841
00:03:41.841 crypto:
00:03:41.841
00:03:41.841 compress:
00:03:41.841
00:03:41.841 vdpa:
00:03:41.841
00:03:41.841
00:03:41.841 Message:
00:03:41.841 =================
00:03:41.841 Content Skipped
00:03:41.841 =================
00:03:41.841
00:03:41.841 apps:
00:03:41.841 dumpcap: explicitly disabled via build config
00:03:41.841 graph: explicitly disabled via build config
00:03:41.841 pdump: explicitly disabled via build config
00:03:41.841 proc-info: explicitly disabled via build config
00:03:41.841 test-acl: explicitly disabled via build config
00:03:41.841 test-bbdev: explicitly disabled via build config
00:03:41.841 test-cmdline: explicitly disabled via build config
00:03:41.841 test-compress-perf: explicitly disabled via build config
00:03:41.841 test-crypto-perf: explicitly disabled via build config
00:03:41.841 test-dma-perf: explicitly disabled via build config
00:03:41.841 test-eventdev: explicitly disabled via build config
00:03:41.841 test-fib: explicitly disabled via build config
00:03:41.841 test-flow-perf: explicitly disabled via build config
00:03:41.841 test-gpudev: explicitly disabled via build config
00:03:41.841 test-mldev: explicitly disabled via build config
00:03:41.841 test-pipeline: explicitly disabled via build config
00:03:41.841 test-pmd: explicitly disabled via build config
00:03:41.841 test-regex: explicitly disabled via build config
00:03:41.841 test-sad: explicitly disabled via build config
00:03:41.841 test-security-perf: explicitly disabled via build config
00:03:41.841
00:03:41.841 libs:
00:03:41.841 argparse: explicitly disabled via build config
00:03:41.841 metrics: explicitly disabled via build config
00:03:41.841 acl: explicitly disabled via build config
00:03:41.841 bbdev: explicitly disabled via build config
00:03:41.842 bitratestats: explicitly disabled via build config
00:03:41.842 bpf: explicitly disabled via build config
00:03:41.842 cfgfile: explicitly disabled via build config
00:03:41.842 distributor: explicitly disabled via build config
00:03:41.842 efd: explicitly disabled via build config
00:03:41.842 eventdev: explicitly disabled via build config
00:03:41.842 dispatcher: explicitly disabled via build config
00:03:41.842 gpudev: explicitly disabled via build config
00:03:41.842 gro: explicitly disabled via build config
00:03:41.842 gso: explicitly disabled via build config
00:03:41.842 ip_frag: explicitly disabled via build config
00:03:41.842 jobstats: explicitly disabled via build config
00:03:41.842 latencystats: explicitly disabled via build config
00:03:41.842 lpm: explicitly disabled via build config
00:03:41.842 member: explicitly disabled via build config
00:03:41.842 pcapng: explicitly disabled via build config
00:03:41.842 rawdev: explicitly disabled via build config
regexdev: explicitly disabled via build config 00:03:41.842 mldev: explicitly disabled via build config 00:03:41.842 rib: explicitly disabled via build config 00:03:41.842 sched: explicitly disabled via build config 00:03:41.842 stack: explicitly disabled via build config 00:03:41.842 ipsec: explicitly disabled via build config 00:03:41.842 pdcp: explicitly disabled via build config 00:03:41.842 fib: explicitly disabled via build config 00:03:41.842 port: explicitly disabled via build config 00:03:41.842 pdump: explicitly disabled via build config 00:03:41.842 table: explicitly disabled via build config 00:03:41.842 pipeline: explicitly disabled via build config 00:03:41.842 graph: explicitly disabled via build config 00:03:41.842 node: explicitly disabled via build config 00:03:41.842 00:03:41.842 drivers: 00:03:41.842 common/cpt: not in enabled drivers build config 00:03:41.842 common/dpaax: not in enabled drivers build config 00:03:41.842 common/iavf: not in enabled drivers build config 00:03:41.842 common/idpf: not in enabled drivers build config 00:03:41.842 common/ionic: not in enabled drivers build config 00:03:41.842 common/mvep: not in enabled drivers build config 00:03:41.842 common/octeontx: not in enabled drivers build config 00:03:41.842 bus/auxiliary: not in enabled drivers build config 00:03:41.842 bus/cdx: not in enabled drivers build config 00:03:41.842 bus/dpaa: not in enabled drivers build config 00:03:41.842 bus/fslmc: not in enabled drivers build config 00:03:41.842 bus/ifpga: not in enabled drivers build config 00:03:41.842 bus/platform: not in enabled drivers build config 00:03:41.842 bus/uacce: not in enabled drivers build config 00:03:41.842 bus/vmbus: not in enabled drivers build config 00:03:41.842 common/cnxk: not in enabled drivers build config 00:03:41.842 common/mlx5: not in enabled drivers build config 00:03:41.842 common/nfp: not in enabled drivers build config 00:03:41.842 common/nitrox: not in enabled drivers build config 00:03:41.842 common/qat: not in enabled drivers build config 00:03:41.842 common/sfc_efx: not in enabled drivers build config 00:03:41.842 mempool/bucket: not in enabled drivers build config 00:03:41.842 mempool/cnxk: not in enabled drivers build config 00:03:41.842 mempool/dpaa: not in enabled drivers build config 00:03:41.842 mempool/dpaa2: not in enabled drivers build config 00:03:41.842 mempool/octeontx: not in enabled drivers build config 00:03:41.842 mempool/stack: not in enabled drivers build config 00:03:41.842 dma/cnxk: not in enabled drivers build config 00:03:41.842 dma/dpaa: not in enabled drivers build config 00:03:41.842 dma/dpaa2: not in enabled drivers build config 00:03:41.842 dma/hisilicon: not in enabled drivers build config 00:03:41.842 dma/idxd: not in enabled drivers build config 00:03:41.842 dma/ioat: not in enabled drivers build config 00:03:41.842 dma/skeleton: not in enabled drivers build config 00:03:41.842 net/af_packet: not in enabled drivers build config 00:03:41.842 net/af_xdp: not in enabled drivers build config 00:03:41.842 net/ark: not in enabled drivers build config 00:03:41.842 net/atlantic: not in enabled drivers build config 00:03:41.842 net/avp: not in enabled drivers build config 00:03:41.842 net/axgbe: not in enabled drivers build config 00:03:41.842 net/bnx2x: not in enabled drivers build config 00:03:41.842 net/bnxt: not in enabled drivers build config 00:03:41.842 net/bonding: not in enabled drivers build config 00:03:41.842 net/cnxk: not in enabled drivers build config 00:03:41.842 net/cpfl: 
not in enabled drivers build config 00:03:41.842 net/cxgbe: not in enabled drivers build config 00:03:41.842 net/dpaa: not in enabled drivers build config 00:03:41.842 net/dpaa2: not in enabled drivers build config 00:03:41.842 net/e1000: not in enabled drivers build config 00:03:41.842 net/ena: not in enabled drivers build config 00:03:41.842 net/enetc: not in enabled drivers build config 00:03:41.842 net/enetfec: not in enabled drivers build config 00:03:41.842 net/enic: not in enabled drivers build config 00:03:41.842 net/failsafe: not in enabled drivers build config 00:03:41.842 net/fm10k: not in enabled drivers build config 00:03:41.842 net/gve: not in enabled drivers build config 00:03:41.842 net/hinic: not in enabled drivers build config 00:03:41.842 net/hns3: not in enabled drivers build config 00:03:41.842 net/i40e: not in enabled drivers build config 00:03:41.842 net/iavf: not in enabled drivers build config 00:03:41.842 net/ice: not in enabled drivers build config 00:03:41.842 net/idpf: not in enabled drivers build config 00:03:41.842 net/igc: not in enabled drivers build config 00:03:41.842 net/ionic: not in enabled drivers build config 00:03:41.842 net/ipn3ke: not in enabled drivers build config 00:03:41.842 net/ixgbe: not in enabled drivers build config 00:03:41.842 net/mana: not in enabled drivers build config 00:03:41.842 net/memif: not in enabled drivers build config 00:03:41.842 net/mlx4: not in enabled drivers build config 00:03:41.842 net/mlx5: not in enabled drivers build config 00:03:41.842 net/mvneta: not in enabled drivers build config 00:03:41.842 net/mvpp2: not in enabled drivers build config 00:03:41.842 net/netvsc: not in enabled drivers build config 00:03:41.842 net/nfb: not in enabled drivers build config 00:03:41.842 net/nfp: not in enabled drivers build config 00:03:41.842 net/ngbe: not in enabled drivers build config 00:03:41.842 net/null: not in enabled drivers build config 00:03:41.842 net/octeontx: not in enabled drivers build config 00:03:41.842 net/octeon_ep: not in enabled drivers build config 00:03:41.842 net/pcap: not in enabled drivers build config 00:03:41.842 net/pfe: not in enabled drivers build config 00:03:41.842 net/qede: not in enabled drivers build config 00:03:41.842 net/ring: not in enabled drivers build config 00:03:41.842 net/sfc: not in enabled drivers build config 00:03:41.842 net/softnic: not in enabled drivers build config 00:03:41.842 net/tap: not in enabled drivers build config 00:03:41.842 net/thunderx: not in enabled drivers build config 00:03:41.842 net/txgbe: not in enabled drivers build config 00:03:41.842 net/vdev_netvsc: not in enabled drivers build config 00:03:41.842 net/vhost: not in enabled drivers build config 00:03:41.842 net/virtio: not in enabled drivers build config 00:03:41.842 net/vmxnet3: not in enabled drivers build config 00:03:41.842 raw/*: missing internal dependency, "rawdev" 00:03:41.842 crypto/armv8: not in enabled drivers build config 00:03:41.842 crypto/bcmfs: not in enabled drivers build config 00:03:41.842 crypto/caam_jr: not in enabled drivers build config 00:03:41.842 crypto/ccp: not in enabled drivers build config 00:03:41.842 crypto/cnxk: not in enabled drivers build config 00:03:41.842 crypto/dpaa_sec: not in enabled drivers build config 00:03:41.842 crypto/dpaa2_sec: not in enabled drivers build config 00:03:41.842 crypto/ipsec_mb: not in enabled drivers build config 00:03:41.842 crypto/mlx5: not in enabled drivers build config 00:03:41.842 crypto/mvsam: not in enabled drivers build config 
00:03:41.842 crypto/nitrox: not in enabled drivers build config 00:03:41.842 crypto/null: not in enabled drivers build config 00:03:41.842 crypto/octeontx: not in enabled drivers build config 00:03:41.842 crypto/openssl: not in enabled drivers build config 00:03:41.842 crypto/scheduler: not in enabled drivers build config 00:03:41.842 crypto/uadk: not in enabled drivers build config 00:03:41.842 crypto/virtio: not in enabled drivers build config 00:03:41.842 compress/isal: not in enabled drivers build config 00:03:41.842 compress/mlx5: not in enabled drivers build config 00:03:41.842 compress/nitrox: not in enabled drivers build config 00:03:41.842 compress/octeontx: not in enabled drivers build config 00:03:41.842 compress/zlib: not in enabled drivers build config 00:03:41.842 regex/*: missing internal dependency, "regexdev" 00:03:41.842 ml/*: missing internal dependency, "mldev" 00:03:41.842 vdpa/ifc: not in enabled drivers build config 00:03:41.842 vdpa/mlx5: not in enabled drivers build config 00:03:41.842 vdpa/nfp: not in enabled drivers build config 00:03:41.842 vdpa/sfc: not in enabled drivers build config 00:03:41.842 event/*: missing internal dependency, "eventdev" 00:03:41.842 baseband/*: missing internal dependency, "bbdev" 00:03:41.842 gpu/*: missing internal dependency, "gpudev" 00:03:41.842 00:03:41.842 00:03:41.842 Build targets in project: 84 00:03:41.842 00:03:41.842 DPDK 24.03.0 00:03:41.842 00:03:41.842 User defined options 00:03:41.842 buildtype : debug 00:03:41.842 default_library : shared 00:03:41.842 libdir : lib 00:03:41.842 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:41.842 b_sanitize : address 00:03:41.842 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:41.842 c_link_args : 00:03:41.842 cpu_instruction_set: native 00:03:41.842 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:41.842 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:41.842 enable_docs : false 00:03:41.842 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:41.842 enable_kmods : false 00:03:41.842 max_lcores : 128 00:03:41.842 tests : false 00:03:41.842 00:03:41.842 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:41.842 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:41.842 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:41.843 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:41.843 [3/267] Linking static target lib/librte_kvargs.a 00:03:42.105 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:42.105 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:42.105 [6/267] Linking static target lib/librte_log.a 00:03:42.367 [7/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:42.367 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:42.367 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 
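For reference, the "User defined options" block in the summary above is meson's record of how this DPDK subproject was configured. Reconstructed as a shell command it would look roughly like the sketch below; the option values are copied verbatim from the summary, but the invocation itself is made by SPDK's DPDK build wrapper and is not captured in this log, so the command shape is an assumption:

    # Approximate reconstruction of the configure step implied by the
    # "User defined options" summary above -- not a command copied from this log.
    # Run from the dpdk/ source tree.
    meson setup build-tmp \
      --buildtype=debug \
      --default-library=shared \
      --libdir=lib \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Db_sanitize=address \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Ddisable_apps='dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test' \
      -Ddisable_libs='acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table' \
      -Denable_docs=false \
      -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm' \
      -Denable_kmods=false \
      -Dmax_lcores=128 \
      -Dtests=false

The disabled app/lib lists match the "Content Skipped" report, and the build summary ("Build targets in project: 84") reflects how few targets remain once everything except the core libraries, the PCI/vdev buses, the ring mempool, and the power drivers is switched off.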
00:03:42.367 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:42.367 [11/267] Linking static target lib/librte_telemetry.a 00:03:42.367 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:42.367 [13/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.367 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:42.367 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:42.367 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:42.367 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:42.627 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:42.887 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:42.887 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:42.887 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:42.887 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:42.887 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:42.887 [24/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.887 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:42.887 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:42.887 [27/267] Linking target lib/librte_log.so.24.1 00:03:42.887 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:43.148 [29/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:43.148 [30/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.148 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:43.148 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:43.148 [33/267] Linking target lib/librte_telemetry.so.24.1 00:03:43.148 [34/267] Linking target lib/librte_kvargs.so.24.1 00:03:43.148 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:43.148 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:43.408 [37/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:43.408 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:43.408 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:43.408 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:43.408 [41/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:43.408 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:43.408 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:43.408 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:43.408 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:43.668 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:43.668 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:43.668 [48/267] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:43.668 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:43.668 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:43.668 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:43.930 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:43.930 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:43.930 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:43.930 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:43.931 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:43.931 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:44.192 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:44.192 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:44.192 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:44.192 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:44.455 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:44.455 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:44.455 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:44.455 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:44.455 [66/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:44.455 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:44.455 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:44.455 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:44.455 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:44.716 [71/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:44.716 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:44.716 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:44.716 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:44.716 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:44.976 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:44.976 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:44.976 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:44.976 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:44.976 [80/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:44.976 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:44.976 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:45.237 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:45.237 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:45.237 [85/267] Linking static target lib/librte_eal.a 00:03:45.237 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:45.237 [87/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:45.497 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:45.497 [89/267] Linking 
static target lib/librte_ring.a 00:03:45.497 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:45.497 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:45.497 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:45.497 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:45.497 [94/267] Linking static target lib/librte_mempool.a 00:03:45.757 [95/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:45.757 [96/267] Linking static target lib/librte_rcu.a 00:03:45.757 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.757 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:46.017 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:46.017 [100/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:46.017 [101/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.277 [102/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:46.277 [103/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:46.277 [104/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:46.277 [105/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:46.277 [106/267] Linking static target lib/librte_mbuf.a 00:03:46.277 [107/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:46.277 [108/267] Linking static target lib/librte_meter.a 00:03:46.277 [109/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:46.277 [110/267] Linking static target lib/librte_net.a 00:03:46.277 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:46.537 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:46.537 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:46.537 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.537 [115/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.537 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.537 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:46.796 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:47.057 [119/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.057 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:47.057 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:47.316 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:47.316 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:47.316 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:47.316 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:47.316 [126/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:47.316 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:47.316 [128/267] Linking static target lib/librte_pci.a 00:03:47.316 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:47.316 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 
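The "*.sym_chk" steps interleaved with the compile lines above (e.g. lib/kvargs.sym_chk, lib/ring.sym_chk) are generated by DPDK's symbol-check script: after a library links, it compares the symbols the shared object actually exports against the library's version map. The same surface can be inspected by hand; an illustrative sketch, with the path assuming this run's build-tmp directory:

    # Not a command from this log: list the defined dynamic (exported)
    # symbols of the just-linked ring library that the lib/ring.sym_chk
    # step validates against its version map.
    nm -D --defined-only /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/lib/librte_ring.so | head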
00:03:47.316 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:47.579 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:47.579 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:47.579 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:47.579 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:47.579 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:47.579 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:47.579 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:47.579 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:47.579 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:47.579 [141/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.579 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:47.579 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:47.839 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:47.839 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:47.839 [146/267] Linking static target lib/librte_cmdline.a 00:03:47.839 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:47.839 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:48.099 [149/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:48.099 [150/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:48.099 [151/267] Linking static target lib/librte_timer.a 00:03:48.099 [152/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:48.099 [153/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:48.359 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:48.359 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:48.620 [156/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:48.620 [157/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.620 [158/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:48.620 [159/267] Linking static target lib/librte_dmadev.a 00:03:48.620 [160/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:48.620 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:48.620 [162/267] Linking static target lib/librte_compressdev.a 00:03:48.620 [163/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:48.620 [164/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:48.880 [165/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:48.880 [166/267] Linking static target lib/librte_ethdev.a 00:03:48.880 [167/267] Linking static target lib/librte_hash.a 00:03:48.880 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:49.141 [169/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.141 [170/267] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:49.141 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:49.141 [172/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:49.141 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:49.401 [174/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:49.401 [175/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.401 [176/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:49.401 [177/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.401 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:49.401 [179/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:49.661 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:49.661 [181/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:49.661 [182/267] Linking static target lib/librte_cryptodev.a 00:03:49.661 [183/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.661 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:49.661 [185/267] Linking static target lib/librte_power.a 00:03:49.921 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:49.921 [187/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:49.921 [188/267] Linking static target lib/librte_reorder.a 00:03:49.921 [189/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:49.921 [190/267] Linking static target lib/librte_security.a 00:03:49.921 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:49.921 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:50.183 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.444 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:50.444 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.703 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.703 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:50.703 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:50.703 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:50.962 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:50.962 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:50.962 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:50.962 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:51.222 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:51.222 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:51.222 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:51.222 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:51.222 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:51.482 [209/267] Linking static target 
drivers/libtmp_rte_bus_vdev.a 00:03:51.482 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.482 [211/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:51.482 [212/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:51.482 [213/267] Linking static target drivers/librte_bus_pci.a 00:03:51.482 [214/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:51.482 [215/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:51.482 [216/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:51.482 [217/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:51.482 [218/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:51.482 [219/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:51.482 [220/267] Linking static target drivers/librte_bus_vdev.a 00:03:51.742 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:51.742 [222/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:51.742 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:51.742 [224/267] Linking static target drivers/librte_mempool_ring.a 00:03:51.742 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.003 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.575 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:53.218 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.218 [229/267] Linking target lib/librte_eal.so.24.1 00:03:53.218 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:53.218 [231/267] Linking target lib/librte_ring.so.24.1 00:03:53.218 [232/267] Linking target lib/librte_pci.so.24.1 00:03:53.218 [233/267] Linking target lib/librte_timer.so.24.1 00:03:53.218 [234/267] Linking target lib/librte_dmadev.so.24.1 00:03:53.218 [235/267] Linking target lib/librte_meter.so.24.1 00:03:53.218 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:53.478 [237/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:53.478 [238/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:53.478 [239/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:53.478 [240/267] Linking target lib/librte_mempool.so.24.1 00:03:53.478 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:53.478 [242/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:53.478 [243/267] Linking target lib/librte_rcu.so.24.1 00:03:53.478 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:53.478 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:53.478 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:53.478 [247/267] Linking target lib/librte_mbuf.so.24.1 00:03:53.478 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:53.739 [249/267] Generating 
symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:53.739 [250/267] Linking target lib/librte_reorder.so.24.1 00:03:53.739 [251/267] Linking target lib/librte_net.so.24.1 00:03:53.739 [252/267] Linking target lib/librte_compressdev.so.24.1 00:03:53.739 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:03:53.739 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:53.739 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:53.739 [256/267] Linking target lib/librte_cmdline.so.24.1 00:03:53.739 [257/267] Linking target lib/librte_hash.so.24.1 00:03:53.739 [258/267] Linking target lib/librte_security.so.24.1 00:03:54.000 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:54.260 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.522 [261/267] Linking target lib/librte_ethdev.so.24.1 00:03:54.522 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:54.782 [263/267] Linking target lib/librte_power.so.24.1 00:03:55.350 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:55.350 [265/267] Linking static target lib/librte_vhost.a 00:03:56.731 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.731 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:56.731 INFO: autodetecting backend as ninja 00:03:56.731 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:14.830 CC lib/ut/ut.o 00:04:14.830 CC lib/log/log.o 00:04:14.830 CC lib/ut_mock/mock.o 00:04:14.830 CC lib/log/log_flags.o 00:04:14.830 CC lib/log/log_deprecated.o 00:04:14.830 LIB libspdk_ut.a 00:04:14.830 LIB libspdk_ut_mock.a 00:04:14.830 LIB libspdk_log.a 00:04:14.830 SO libspdk_ut.so.2.0 00:04:14.830 SO libspdk_ut_mock.so.6.0 00:04:14.830 SO libspdk_log.so.7.1 00:04:14.830 SYMLINK libspdk_ut.so 00:04:14.830 SYMLINK libspdk_ut_mock.so 00:04:14.830 SYMLINK libspdk_log.so 00:04:14.830 CC lib/ioat/ioat.o 00:04:14.830 CC lib/dma/dma.o 00:04:14.830 CXX lib/trace_parser/trace.o 00:04:14.830 CC lib/util/base64.o 00:04:14.830 CC lib/util/bit_array.o 00:04:14.830 CC lib/util/cpuset.o 00:04:14.830 CC lib/util/crc32c.o 00:04:14.830 CC lib/util/crc32.o 00:04:14.830 CC lib/util/crc16.o 00:04:14.830 CC lib/vfio_user/host/vfio_user_pci.o 00:04:14.830 CC lib/util/crc32_ieee.o 00:04:14.830 CC lib/util/crc64.o 00:04:14.830 CC lib/util/dif.o 00:04:14.830 CC lib/util/fd.o 00:04:14.830 LIB libspdk_dma.a 00:04:14.830 CC lib/util/fd_group.o 00:04:14.830 SO libspdk_dma.so.5.0 00:04:14.830 LIB libspdk_ioat.a 00:04:14.830 SO libspdk_ioat.so.7.0 00:04:14.830 CC lib/util/file.o 00:04:14.830 CC lib/util/hexlify.o 00:04:14.830 CC lib/util/iov.o 00:04:14.830 SYMLINK libspdk_dma.so 00:04:14.830 CC lib/util/math.o 00:04:14.830 CC lib/util/net.o 00:04:14.830 SYMLINK libspdk_ioat.so 00:04:14.830 CC lib/util/pipe.o 00:04:14.830 CC lib/vfio_user/host/vfio_user.o 00:04:14.830 CC lib/util/strerror_tls.o 00:04:14.830 CC lib/util/string.o 00:04:14.830 CC lib/util/uuid.o 00:04:14.830 CC lib/util/xor.o 00:04:14.830 CC lib/util/zipf.o 00:04:14.830 CC lib/util/md5.o 00:04:14.830 LIB libspdk_vfio_user.a 00:04:14.830 SO libspdk_vfio_user.so.5.0 00:04:14.830 SYMLINK libspdk_vfio_user.so 00:04:14.830 LIB libspdk_util.a 00:04:14.830 SO libspdk_util.so.10.1 00:04:14.830 LIB 
libspdk_trace_parser.a 00:04:14.830 SO libspdk_trace_parser.so.6.0 00:04:14.830 SYMLINK libspdk_util.so 00:04:14.830 SYMLINK libspdk_trace_parser.so 00:04:14.830 CC lib/env_dpdk/memory.o 00:04:14.830 CC lib/env_dpdk/env.o 00:04:14.830 CC lib/env_dpdk/pci.o 00:04:14.830 CC lib/env_dpdk/threads.o 00:04:14.830 CC lib/env_dpdk/init.o 00:04:14.830 CC lib/vmd/vmd.o 00:04:14.830 CC lib/conf/conf.o 00:04:14.830 CC lib/idxd/idxd.o 00:04:14.830 CC lib/rdma_utils/rdma_utils.o 00:04:14.830 CC lib/json/json_parse.o 00:04:14.830 CC lib/json/json_util.o 00:04:14.830 CC lib/vmd/led.o 00:04:14.830 LIB libspdk_rdma_utils.a 00:04:14.830 CC lib/env_dpdk/pci_ioat.o 00:04:14.830 LIB libspdk_conf.a 00:04:14.830 SO libspdk_rdma_utils.so.1.0 00:04:14.830 SO libspdk_conf.so.6.0 00:04:14.830 SYMLINK libspdk_conf.so 00:04:14.830 SYMLINK libspdk_rdma_utils.so 00:04:14.830 CC lib/env_dpdk/pci_virtio.o 00:04:14.830 CC lib/json/json_write.o 00:04:14.830 CC lib/env_dpdk/pci_vmd.o 00:04:14.830 CC lib/idxd/idxd_user.o 00:04:14.830 CC lib/env_dpdk/pci_idxd.o 00:04:14.830 CC lib/env_dpdk/pci_event.o 00:04:14.830 CC lib/rdma_provider/common.o 00:04:14.830 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:14.830 CC lib/env_dpdk/sigbus_handler.o 00:04:14.830 CC lib/env_dpdk/pci_dpdk.o 00:04:14.830 LIB libspdk_json.a 00:04:14.830 CC lib/idxd/idxd_kernel.o 00:04:14.830 SO libspdk_json.so.6.0 00:04:14.830 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:14.830 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:14.830 SYMLINK libspdk_json.so 00:04:14.830 LIB libspdk_rdma_provider.a 00:04:14.830 SO libspdk_rdma_provider.so.7.0 00:04:14.830 LIB libspdk_vmd.a 00:04:14.830 SYMLINK libspdk_rdma_provider.so 00:04:14.830 LIB libspdk_idxd.a 00:04:14.830 SO libspdk_vmd.so.6.0 00:04:14.830 SO libspdk_idxd.so.12.1 00:04:14.830 CC lib/jsonrpc/jsonrpc_server.o 00:04:14.830 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:14.830 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:14.830 CC lib/jsonrpc/jsonrpc_client.o 00:04:14.830 SYMLINK libspdk_vmd.so 00:04:14.831 SYMLINK libspdk_idxd.so 00:04:14.831 LIB libspdk_jsonrpc.a 00:04:14.831 SO libspdk_jsonrpc.so.6.0 00:04:14.831 SYMLINK libspdk_jsonrpc.so 00:04:14.831 LIB libspdk_env_dpdk.a 00:04:14.831 SO libspdk_env_dpdk.so.15.1 00:04:14.831 CC lib/rpc/rpc.o 00:04:14.831 SYMLINK libspdk_env_dpdk.so 00:04:15.088 LIB libspdk_rpc.a 00:04:15.088 SO libspdk_rpc.so.6.0 00:04:15.088 SYMLINK libspdk_rpc.so 00:04:15.345 CC lib/trace/trace_flags.o 00:04:15.345 CC lib/trace/trace.o 00:04:15.345 CC lib/keyring/keyring.o 00:04:15.345 CC lib/trace/trace_rpc.o 00:04:15.345 CC lib/keyring/keyring_rpc.o 00:04:15.345 CC lib/notify/notify.o 00:04:15.345 CC lib/notify/notify_rpc.o 00:04:15.602 LIB libspdk_notify.a 00:04:15.602 SO libspdk_notify.so.6.0 00:04:15.602 LIB libspdk_keyring.a 00:04:15.602 SYMLINK libspdk_notify.so 00:04:15.602 SO libspdk_keyring.so.2.0 00:04:15.602 LIB libspdk_trace.a 00:04:15.602 SYMLINK libspdk_keyring.so 00:04:15.602 SO libspdk_trace.so.11.0 00:04:15.860 SYMLINK libspdk_trace.so 00:04:16.118 CC lib/sock/sock_rpc.o 00:04:16.118 CC lib/thread/thread.o 00:04:16.118 CC lib/sock/sock.o 00:04:16.118 CC lib/thread/iobuf.o 00:04:16.375 LIB libspdk_sock.a 00:04:16.375 SO libspdk_sock.so.10.0 00:04:16.632 SYMLINK libspdk_sock.so 00:04:16.632 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:16.632 CC lib/nvme/nvme_fabric.o 00:04:16.632 CC lib/nvme/nvme_ctrlr.o 00:04:16.632 CC lib/nvme/nvme_ns.o 00:04:16.632 CC lib/nvme/nvme_pcie_common.o 00:04:16.632 CC lib/nvme/nvme_ns_cmd.o 00:04:16.632 CC lib/nvme/nvme_qpair.o 00:04:16.632 CC lib/nvme/nvme.o 
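At this point the log has switched from DPDK's ninja output to SPDK's own build: each SPDK library is compiled (CC), archived (LIB), linked into a versioned shared object (SO, e.g. libspdk_ut.so.2.0), and then given an unversioned development symlink (SYMLINK, e.g. libspdk_ut.so). A post-build spot check of those sonames might look like the sketch below, assuming SPDK's usual build/lib output directory:

    # Illustrative spot check, not part of the logged CI run: print the
    # SONAME embedded in each versioned SPDK shared object and compare it
    # with the SO/SYMLINK names reported above.
    for so in /home/vagrant/spdk_repo/spdk/build/lib/libspdk_*.so.*; do
      printf '%s -> ' "$so"
      readelf -d "$so" | awk '/SONAME/ { print $NF }'
    done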
00:04:16.632 CC lib/nvme/nvme_pcie.o 00:04:17.199 LIB libspdk_thread.a 00:04:17.457 SO libspdk_thread.so.11.0 00:04:17.457 CC lib/nvme/nvme_quirks.o 00:04:17.457 CC lib/nvme/nvme_transport.o 00:04:17.457 CC lib/nvme/nvme_discovery.o 00:04:17.457 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:17.457 SYMLINK libspdk_thread.so 00:04:17.457 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:17.457 CC lib/nvme/nvme_tcp.o 00:04:17.457 CC lib/nvme/nvme_opal.o 00:04:17.714 CC lib/nvme/nvme_io_msg.o 00:04:17.714 CC lib/nvme/nvme_poll_group.o 00:04:17.714 CC lib/nvme/nvme_zns.o 00:04:17.970 CC lib/nvme/nvme_stubs.o 00:04:17.970 CC lib/nvme/nvme_auth.o 00:04:17.970 CC lib/nvme/nvme_cuse.o 00:04:17.970 CC lib/nvme/nvme_rdma.o 00:04:18.226 CC lib/accel/accel.o 00:04:18.226 CC lib/blob/blobstore.o 00:04:18.226 CC lib/init/json_config.o 00:04:18.226 CC lib/virtio/virtio.o 00:04:18.226 CC lib/init/subsystem.o 00:04:18.483 CC lib/init/subsystem_rpc.o 00:04:18.483 CC lib/accel/accel_rpc.o 00:04:18.483 CC lib/accel/accel_sw.o 00:04:18.483 CC lib/init/rpc.o 00:04:18.483 CC lib/virtio/virtio_vhost_user.o 00:04:18.483 CC lib/blob/request.o 00:04:18.741 CC lib/blob/zeroes.o 00:04:18.741 LIB libspdk_init.a 00:04:18.741 SO libspdk_init.so.6.0 00:04:18.741 SYMLINK libspdk_init.so 00:04:18.741 CC lib/blob/blob_bs_dev.o 00:04:18.741 CC lib/virtio/virtio_vfio_user.o 00:04:18.741 CC lib/virtio/virtio_pci.o 00:04:18.999 CC lib/event/app.o 00:04:18.999 CC lib/event/reactor.o 00:04:18.999 CC lib/fsdev/fsdev.o 00:04:18.999 CC lib/fsdev/fsdev_io.o 00:04:18.999 CC lib/event/log_rpc.o 00:04:18.999 CC lib/fsdev/fsdev_rpc.o 00:04:18.999 LIB libspdk_accel.a 00:04:18.999 CC lib/event/app_rpc.o 00:04:18.999 CC lib/event/scheduler_static.o 00:04:18.999 LIB libspdk_virtio.a 00:04:18.999 SO libspdk_accel.so.16.0 00:04:19.373 SO libspdk_virtio.so.7.0 00:04:19.373 SYMLINK libspdk_accel.so 00:04:19.373 SYMLINK libspdk_virtio.so 00:04:19.373 LIB libspdk_nvme.a 00:04:19.373 CC lib/bdev/bdev_rpc.o 00:04:19.373 CC lib/bdev/bdev.o 00:04:19.373 CC lib/bdev/bdev_zone.o 00:04:19.373 CC lib/bdev/part.o 00:04:19.373 CC lib/bdev/scsi_nvme.o 00:04:19.373 LIB libspdk_event.a 00:04:19.373 SO libspdk_nvme.so.15.0 00:04:19.373 SO libspdk_event.so.14.0 00:04:19.373 SYMLINK libspdk_event.so 00:04:19.631 LIB libspdk_fsdev.a 00:04:19.631 SO libspdk_fsdev.so.2.0 00:04:19.631 SYMLINK libspdk_nvme.so 00:04:19.631 SYMLINK libspdk_fsdev.so 00:04:19.890 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:20.457 LIB libspdk_fuse_dispatcher.a 00:04:20.457 SO libspdk_fuse_dispatcher.so.1.0 00:04:20.457 SYMLINK libspdk_fuse_dispatcher.so 00:04:21.023 LIB libspdk_blob.a 00:04:21.023 SO libspdk_blob.so.12.0 00:04:21.023 SYMLINK libspdk_blob.so 00:04:21.280 CC lib/lvol/lvol.o 00:04:21.280 CC lib/blobfs/tree.o 00:04:21.280 CC lib/blobfs/blobfs.o 00:04:21.846 LIB libspdk_bdev.a 00:04:21.846 SO libspdk_bdev.so.17.0 00:04:21.846 SYMLINK libspdk_bdev.so 00:04:22.104 CC lib/ftl/ftl_core.o 00:04:22.104 CC lib/scsi/lun.o 00:04:22.104 CC lib/scsi/dev.o 00:04:22.104 CC lib/ftl/ftl_init.o 00:04:22.104 CC lib/scsi/port.o 00:04:22.104 CC lib/nbd/nbd.o 00:04:22.104 CC lib/nvmf/ctrlr.o 00:04:22.104 CC lib/ublk/ublk.o 00:04:22.104 LIB libspdk_blobfs.a 00:04:22.104 SO libspdk_blobfs.so.11.0 00:04:22.104 CC lib/nvmf/ctrlr_discovery.o 00:04:22.104 SYMLINK libspdk_blobfs.so 00:04:22.104 CC lib/ublk/ublk_rpc.o 00:04:22.361 CC lib/ftl/ftl_layout.o 00:04:22.361 LIB libspdk_lvol.a 00:04:22.361 CC lib/nbd/nbd_rpc.o 00:04:22.361 SO libspdk_lvol.so.11.0 00:04:22.361 SYMLINK libspdk_lvol.so 00:04:22.361 CC 
lib/nvmf/ctrlr_bdev.o 00:04:22.361 CC lib/scsi/scsi.o 00:04:22.361 CC lib/scsi/scsi_bdev.o 00:04:22.361 CC lib/scsi/scsi_pr.o 00:04:22.361 LIB libspdk_nbd.a 00:04:22.361 SO libspdk_nbd.so.7.0 00:04:22.361 CC lib/nvmf/subsystem.o 00:04:22.620 CC lib/ftl/ftl_debug.o 00:04:22.620 SYMLINK libspdk_nbd.so 00:04:22.620 CC lib/ftl/ftl_io.o 00:04:22.620 CC lib/ftl/ftl_sb.o 00:04:22.620 LIB libspdk_ublk.a 00:04:22.620 CC lib/scsi/scsi_rpc.o 00:04:22.620 SO libspdk_ublk.so.3.0 00:04:22.620 CC lib/scsi/task.o 00:04:22.620 CC lib/nvmf/nvmf.o 00:04:22.620 SYMLINK libspdk_ublk.so 00:04:22.620 CC lib/nvmf/nvmf_rpc.o 00:04:22.620 CC lib/nvmf/transport.o 00:04:22.878 CC lib/ftl/ftl_l2p.o 00:04:22.878 CC lib/ftl/ftl_l2p_flat.o 00:04:22.878 CC lib/ftl/ftl_nv_cache.o 00:04:22.878 LIB libspdk_scsi.a 00:04:22.878 CC lib/ftl/ftl_band.o 00:04:22.878 SO libspdk_scsi.so.9.0 00:04:22.878 CC lib/ftl/ftl_band_ops.o 00:04:23.137 SYMLINK libspdk_scsi.so 00:04:23.137 CC lib/nvmf/tcp.o 00:04:23.137 CC lib/ftl/ftl_writer.o 00:04:23.137 CC lib/ftl/ftl_rq.o 00:04:23.398 CC lib/ftl/ftl_reloc.o 00:04:23.398 CC lib/ftl/ftl_l2p_cache.o 00:04:23.398 CC lib/ftl/ftl_p2l.o 00:04:23.398 CC lib/ftl/ftl_p2l_log.o 00:04:23.678 CC lib/ftl/mngt/ftl_mngt.o 00:04:23.678 CC lib/nvmf/stubs.o 00:04:23.678 CC lib/nvmf/mdns_server.o 00:04:23.678 CC lib/nvmf/rdma.o 00:04:23.678 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:23.678 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:23.678 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:23.935 CC lib/nvmf/auth.o 00:04:23.935 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:23.935 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:23.935 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:23.935 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:23.935 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:23.935 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:24.193 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:24.193 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:24.193 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:24.193 CC lib/ftl/utils/ftl_conf.o 00:04:24.193 CC lib/ftl/utils/ftl_md.o 00:04:24.193 CC lib/ftl/utils/ftl_mempool.o 00:04:24.193 CC lib/ftl/utils/ftl_bitmap.o 00:04:24.193 CC lib/ftl/utils/ftl_property.o 00:04:24.453 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:24.453 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:24.453 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:24.453 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:24.453 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:24.453 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:24.453 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:24.710 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:24.710 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:24.710 CC lib/iscsi/conn.o 00:04:24.710 CC lib/iscsi/init_grp.o 00:04:24.710 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:24.710 CC lib/vhost/vhost.o 00:04:24.710 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:24.710 CC lib/iscsi/iscsi.o 00:04:24.710 CC lib/iscsi/param.o 00:04:24.710 CC lib/iscsi/portal_grp.o 00:04:24.710 CC lib/iscsi/tgt_node.o 00:04:24.967 CC lib/iscsi/iscsi_subsystem.o 00:04:24.967 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:24.967 CC lib/iscsi/iscsi_rpc.o 00:04:24.967 CC lib/iscsi/task.o 00:04:24.967 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:24.967 CC lib/ftl/base/ftl_base_dev.o 00:04:25.225 CC lib/ftl/base/ftl_base_bdev.o 00:04:25.225 CC lib/ftl/ftl_trace.o 00:04:25.225 CC lib/vhost/vhost_rpc.o 00:04:25.225 CC lib/vhost/vhost_scsi.o 00:04:25.225 CC lib/vhost/vhost_blk.o 00:04:25.483 CC lib/vhost/rte_vhost_user.o 00:04:25.483 LIB libspdk_ftl.a 00:04:25.483 SO libspdk_ftl.so.9.0 00:04:25.741 SYMLINK libspdk_ftl.so 00:04:25.999 LIB libspdk_nvmf.a 
00:04:25.999 SO libspdk_nvmf.so.20.0 00:04:26.257 LIB libspdk_iscsi.a 00:04:26.257 SYMLINK libspdk_nvmf.so 00:04:26.257 SO libspdk_iscsi.so.8.0 00:04:26.257 LIB libspdk_vhost.a 00:04:26.515 SYMLINK libspdk_iscsi.so 00:04:26.515 SO libspdk_vhost.so.8.0 00:04:26.515 SYMLINK libspdk_vhost.so 00:04:26.773 CC module/env_dpdk/env_dpdk_rpc.o 00:04:26.773 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:26.773 CC module/blob/bdev/blob_bdev.o 00:04:26.773 CC module/sock/posix/posix.o 00:04:26.773 CC module/scheduler/gscheduler/gscheduler.o 00:04:26.773 CC module/accel/error/accel_error.o 00:04:26.773 CC module/accel/ioat/accel_ioat.o 00:04:26.773 CC module/fsdev/aio/fsdev_aio.o 00:04:26.773 CC module/keyring/file/keyring.o 00:04:26.773 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:26.773 LIB libspdk_env_dpdk_rpc.a 00:04:27.033 SO libspdk_env_dpdk_rpc.so.6.0 00:04:27.033 SYMLINK libspdk_env_dpdk_rpc.so 00:04:27.033 CC module/keyring/file/keyring_rpc.o 00:04:27.033 LIB libspdk_scheduler_gscheduler.a 00:04:27.033 CC module/accel/ioat/accel_ioat_rpc.o 00:04:27.033 SO libspdk_scheduler_gscheduler.so.4.0 00:04:27.033 LIB libspdk_scheduler_dpdk_governor.a 00:04:27.033 LIB libspdk_scheduler_dynamic.a 00:04:27.033 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:27.033 CC module/accel/error/accel_error_rpc.o 00:04:27.033 SO libspdk_scheduler_dynamic.so.4.0 00:04:27.033 SYMLINK libspdk_scheduler_gscheduler.so 00:04:27.033 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:27.033 LIB libspdk_keyring_file.a 00:04:27.033 SYMLINK libspdk_scheduler_dynamic.so 00:04:27.033 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:27.033 LIB libspdk_accel_ioat.a 00:04:27.033 LIB libspdk_blob_bdev.a 00:04:27.033 SO libspdk_keyring_file.so.2.0 00:04:27.033 CC module/accel/dsa/accel_dsa.o 00:04:27.033 SO libspdk_blob_bdev.so.12.0 00:04:27.033 SO libspdk_accel_ioat.so.6.0 00:04:27.033 LIB libspdk_accel_error.a 00:04:27.033 SO libspdk_accel_error.so.2.0 00:04:27.292 SYMLINK libspdk_keyring_file.so 00:04:27.292 SYMLINK libspdk_blob_bdev.so 00:04:27.292 SYMLINK libspdk_accel_ioat.so 00:04:27.292 CC module/fsdev/aio/linux_aio_mgr.o 00:04:27.292 CC module/accel/iaa/accel_iaa.o 00:04:27.292 SYMLINK libspdk_accel_error.so 00:04:27.292 CC module/accel/dsa/accel_dsa_rpc.o 00:04:27.292 CC module/keyring/linux/keyring.o 00:04:27.292 CC module/keyring/linux/keyring_rpc.o 00:04:27.292 CC module/accel/iaa/accel_iaa_rpc.o 00:04:27.292 LIB libspdk_keyring_linux.a 00:04:27.292 LIB libspdk_accel_dsa.a 00:04:27.292 SO libspdk_keyring_linux.so.1.0 00:04:27.292 SO libspdk_accel_dsa.so.5.0 00:04:27.551 CC module/bdev/delay/vbdev_delay.o 00:04:27.551 CC module/blobfs/bdev/blobfs_bdev.o 00:04:27.551 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:27.551 LIB libspdk_fsdev_aio.a 00:04:27.551 CC module/bdev/error/vbdev_error.o 00:04:27.551 SYMLINK libspdk_keyring_linux.so 00:04:27.551 LIB libspdk_accel_iaa.a 00:04:27.551 CC module/bdev/gpt/gpt.o 00:04:27.551 SYMLINK libspdk_accel_dsa.so 00:04:27.551 CC module/bdev/gpt/vbdev_gpt.o 00:04:27.551 SO libspdk_fsdev_aio.so.1.0 00:04:27.551 SO libspdk_accel_iaa.so.3.0 00:04:27.551 SYMLINK libspdk_fsdev_aio.so 00:04:27.551 CC module/bdev/error/vbdev_error_rpc.o 00:04:27.551 SYMLINK libspdk_accel_iaa.so 00:04:27.551 LIB libspdk_blobfs_bdev.a 00:04:27.551 CC module/bdev/lvol/vbdev_lvol.o 00:04:27.551 LIB libspdk_sock_posix.a 00:04:27.551 SO libspdk_blobfs_bdev.so.6.0 00:04:27.551 SO libspdk_sock_posix.so.6.0 00:04:27.551 SYMLINK libspdk_blobfs_bdev.so 00:04:27.551 CC module/bdev/lvol/vbdev_lvol_rpc.o 
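The CC module/... lines above build SPDK's pluggable modules (schedulers, accel engines, keyrings, sock and fsdev implementations, and the bdev modules that follow). Whether a module actually ended up in a target binary can be confirmed at runtime over SPDK's JSON-RPC interface; a rough sketch, assuming the stock scripts/rpc.py from the repo and a default spdk_tgt build:

    # Illustrative runtime check, not part of the logged CI run.
    # Run from the SPDK repo root.
    ./build/bin/spdk_tgt &                        # start the target app
    sleep 2                                       # wait for the RPC socket
    ./scripts/rpc.py framework_get_subsystems     # list registered subsystems
    kill %1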
00:04:27.818 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:27.818 CC module/bdev/null/bdev_null.o 00:04:27.818 CC module/bdev/malloc/bdev_malloc.o 00:04:27.818 LIB libspdk_bdev_gpt.a 00:04:27.818 SYMLINK libspdk_sock_posix.so 00:04:27.818 CC module/bdev/null/bdev_null_rpc.o 00:04:27.818 SO libspdk_bdev_gpt.so.6.0 00:04:27.818 LIB libspdk_bdev_error.a 00:04:27.818 SO libspdk_bdev_error.so.6.0 00:04:27.818 SYMLINK libspdk_bdev_gpt.so 00:04:27.818 SYMLINK libspdk_bdev_error.so 00:04:27.818 LIB libspdk_bdev_delay.a 00:04:27.818 CC module/bdev/nvme/bdev_nvme.o 00:04:27.818 CC module/bdev/passthru/vbdev_passthru.o 00:04:27.818 SO libspdk_bdev_delay.so.6.0 00:04:27.818 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:27.818 CC module/bdev/raid/bdev_raid.o 00:04:27.818 SYMLINK libspdk_bdev_delay.so 00:04:27.818 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:28.075 LIB libspdk_bdev_null.a 00:04:28.075 CC module/bdev/split/vbdev_split.o 00:04:28.075 SO libspdk_bdev_null.so.6.0 00:04:28.075 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:28.075 SYMLINK libspdk_bdev_null.so 00:04:28.075 CC module/bdev/nvme/nvme_rpc.o 00:04:28.075 CC module/bdev/split/vbdev_split_rpc.o 00:04:28.075 CC module/bdev/raid/bdev_raid_rpc.o 00:04:28.075 LIB libspdk_bdev_passthru.a 00:04:28.075 LIB libspdk_bdev_malloc.a 00:04:28.075 SO libspdk_bdev_passthru.so.6.0 00:04:28.075 LIB libspdk_bdev_lvol.a 00:04:28.075 SO libspdk_bdev_malloc.so.6.0 00:04:28.075 SO libspdk_bdev_lvol.so.6.0 00:04:28.075 SYMLINK libspdk_bdev_passthru.so 00:04:28.075 CC module/bdev/raid/bdev_raid_sb.o 00:04:28.075 CC module/bdev/raid/raid0.o 00:04:28.075 LIB libspdk_bdev_split.a 00:04:28.075 SYMLINK libspdk_bdev_malloc.so 00:04:28.075 CC module/bdev/raid/raid1.o 00:04:28.332 SYMLINK libspdk_bdev_lvol.so 00:04:28.332 SO libspdk_bdev_split.so.6.0 00:04:28.332 SYMLINK libspdk_bdev_split.so 00:04:28.332 CC module/bdev/nvme/bdev_mdns_client.o 00:04:28.332 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:28.332 CC module/bdev/raid/concat.o 00:04:28.332 CC module/bdev/aio/bdev_aio.o 00:04:28.332 CC module/bdev/xnvme/bdev_xnvme.o 00:04:28.332 CC module/bdev/aio/bdev_aio_rpc.o 00:04:28.332 CC module/bdev/nvme/vbdev_opal.o 00:04:28.589 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:28.589 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:28.589 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:28.589 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:28.589 LIB libspdk_bdev_xnvme.a 00:04:28.589 SO libspdk_bdev_xnvme.so.3.0 00:04:28.846 SYMLINK libspdk_bdev_xnvme.so 00:04:28.846 LIB libspdk_bdev_aio.a 00:04:28.846 SO libspdk_bdev_aio.so.6.0 00:04:28.846 LIB libspdk_bdev_zone_block.a 00:04:28.846 CC module/bdev/ftl/bdev_ftl.o 00:04:28.846 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:28.846 SO libspdk_bdev_zone_block.so.6.0 00:04:28.846 SYMLINK libspdk_bdev_aio.so 00:04:28.846 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:28.846 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:28.847 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:28.847 CC module/bdev/iscsi/bdev_iscsi.o 00:04:28.847 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:28.847 SYMLINK libspdk_bdev_zone_block.so 00:04:29.104 LIB libspdk_bdev_raid.a 00:04:29.104 LIB libspdk_bdev_ftl.a 00:04:29.104 SO libspdk_bdev_raid.so.6.0 00:04:29.104 SO libspdk_bdev_ftl.so.6.0 00:04:29.104 SYMLINK libspdk_bdev_ftl.so 00:04:29.104 SYMLINK libspdk_bdev_raid.so 00:04:29.361 LIB libspdk_bdev_iscsi.a 00:04:29.361 SO libspdk_bdev_iscsi.so.6.0 00:04:29.361 LIB libspdk_bdev_virtio.a 00:04:29.361 SO libspdk_bdev_virtio.so.6.0 00:04:29.361 SYMLINK 
libspdk_bdev_iscsi.so 00:04:29.361 SYMLINK libspdk_bdev_virtio.so 00:04:30.295 LIB libspdk_bdev_nvme.a 00:04:30.295 SO libspdk_bdev_nvme.so.7.1 00:04:30.295 SYMLINK libspdk_bdev_nvme.so 00:04:30.862 CC module/event/subsystems/vmd/vmd.o 00:04:30.862 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:30.862 CC module/event/subsystems/sock/sock.o 00:04:30.862 CC module/event/subsystems/fsdev/fsdev.o 00:04:30.862 CC module/event/subsystems/keyring/keyring.o 00:04:30.862 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:30.862 CC module/event/subsystems/scheduler/scheduler.o 00:04:30.862 CC module/event/subsystems/iobuf/iobuf.o 00:04:30.862 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:30.862 LIB libspdk_event_vhost_blk.a 00:04:30.862 LIB libspdk_event_vmd.a 00:04:30.862 LIB libspdk_event_keyring.a 00:04:30.862 LIB libspdk_event_scheduler.a 00:04:30.862 LIB libspdk_event_fsdev.a 00:04:30.862 LIB libspdk_event_sock.a 00:04:30.862 SO libspdk_event_vhost_blk.so.3.0 00:04:30.862 LIB libspdk_event_iobuf.a 00:04:30.862 SO libspdk_event_vmd.so.6.0 00:04:30.862 SO libspdk_event_keyring.so.1.0 00:04:30.862 SO libspdk_event_scheduler.so.4.0 00:04:30.862 SO libspdk_event_sock.so.5.0 00:04:30.862 SO libspdk_event_fsdev.so.1.0 00:04:30.862 SO libspdk_event_iobuf.so.3.0 00:04:30.862 SYMLINK libspdk_event_keyring.so 00:04:30.862 SYMLINK libspdk_event_vhost_blk.so 00:04:30.862 SYMLINK libspdk_event_sock.so 00:04:30.862 SYMLINK libspdk_event_scheduler.so 00:04:30.862 SYMLINK libspdk_event_fsdev.so 00:04:30.862 SYMLINK libspdk_event_vmd.so 00:04:30.862 SYMLINK libspdk_event_iobuf.so 00:04:31.120 CC module/event/subsystems/accel/accel.o 00:04:31.378 LIB libspdk_event_accel.a 00:04:31.378 SO libspdk_event_accel.so.6.0 00:04:31.378 SYMLINK libspdk_event_accel.so 00:04:31.636 CC module/event/subsystems/bdev/bdev.o 00:04:31.894 LIB libspdk_event_bdev.a 00:04:31.894 SO libspdk_event_bdev.so.6.0 00:04:31.894 SYMLINK libspdk_event_bdev.so 00:04:32.153 CC module/event/subsystems/nbd/nbd.o 00:04:32.153 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:32.153 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:32.153 CC module/event/subsystems/scsi/scsi.o 00:04:32.153 CC module/event/subsystems/ublk/ublk.o 00:04:32.153 LIB libspdk_event_nbd.a 00:04:32.153 LIB libspdk_event_ublk.a 00:04:32.153 LIB libspdk_event_scsi.a 00:04:32.153 SO libspdk_event_nbd.so.6.0 00:04:32.153 SO libspdk_event_ublk.so.3.0 00:04:32.153 SO libspdk_event_scsi.so.6.0 00:04:32.153 SYMLINK libspdk_event_nbd.so 00:04:32.153 SYMLINK libspdk_event_scsi.so 00:04:32.153 SYMLINK libspdk_event_ublk.so 00:04:32.153 LIB libspdk_event_nvmf.a 00:04:32.153 SO libspdk_event_nvmf.so.6.0 00:04:32.410 SYMLINK libspdk_event_nvmf.so 00:04:32.410 CC module/event/subsystems/iscsi/iscsi.o 00:04:32.411 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:32.411 LIB libspdk_event_vhost_scsi.a 00:04:32.670 LIB libspdk_event_iscsi.a 00:04:32.670 SO libspdk_event_vhost_scsi.so.3.0 00:04:32.670 SO libspdk_event_iscsi.so.6.0 00:04:32.670 SYMLINK libspdk_event_vhost_scsi.so 00:04:32.670 SYMLINK libspdk_event_iscsi.so 00:04:32.670 SO libspdk.so.6.0 00:04:32.670 SYMLINK libspdk.so 00:04:32.940 CC test/rpc_client/rpc_client_test.o 00:04:32.940 TEST_HEADER include/spdk/accel.h 00:04:32.940 TEST_HEADER include/spdk/accel_module.h 00:04:32.940 TEST_HEADER include/spdk/assert.h 00:04:32.940 CXX app/trace/trace.o 00:04:32.940 TEST_HEADER include/spdk/barrier.h 00:04:32.940 TEST_HEADER include/spdk/base64.h 00:04:32.940 TEST_HEADER include/spdk/bdev.h 00:04:32.940 TEST_HEADER 
include/spdk/bdev_module.h 00:04:32.940 TEST_HEADER include/spdk/bdev_zone.h 00:04:32.940 TEST_HEADER include/spdk/bit_array.h 00:04:32.940 TEST_HEADER include/spdk/bit_pool.h 00:04:32.940 TEST_HEADER include/spdk/blob_bdev.h 00:04:32.940 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:32.940 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:32.940 TEST_HEADER include/spdk/blobfs.h 00:04:32.940 TEST_HEADER include/spdk/blob.h 00:04:32.940 TEST_HEADER include/spdk/conf.h 00:04:32.940 TEST_HEADER include/spdk/config.h 00:04:32.940 TEST_HEADER include/spdk/cpuset.h 00:04:32.940 TEST_HEADER include/spdk/crc16.h 00:04:32.940 TEST_HEADER include/spdk/crc32.h 00:04:32.940 TEST_HEADER include/spdk/crc64.h 00:04:32.940 TEST_HEADER include/spdk/dif.h 00:04:32.940 TEST_HEADER include/spdk/dma.h 00:04:32.940 TEST_HEADER include/spdk/endian.h 00:04:32.940 TEST_HEADER include/spdk/env_dpdk.h 00:04:32.940 TEST_HEADER include/spdk/env.h 00:04:32.940 TEST_HEADER include/spdk/event.h 00:04:32.940 TEST_HEADER include/spdk/fd_group.h 00:04:32.940 TEST_HEADER include/spdk/fd.h 00:04:32.940 CC examples/util/zipf/zipf.o 00:04:32.940 TEST_HEADER include/spdk/file.h 00:04:32.940 TEST_HEADER include/spdk/fsdev.h 00:04:32.940 TEST_HEADER include/spdk/fsdev_module.h 00:04:32.940 TEST_HEADER include/spdk/ftl.h 00:04:32.940 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:32.940 TEST_HEADER include/spdk/gpt_spec.h 00:04:32.940 TEST_HEADER include/spdk/hexlify.h 00:04:32.940 CC examples/ioat/perf/perf.o 00:04:32.940 CC test/thread/poller_perf/poller_perf.o 00:04:32.940 TEST_HEADER include/spdk/histogram_data.h 00:04:32.940 TEST_HEADER include/spdk/idxd.h 00:04:32.940 TEST_HEADER include/spdk/idxd_spec.h 00:04:32.940 TEST_HEADER include/spdk/init.h 00:04:32.940 TEST_HEADER include/spdk/ioat.h 00:04:32.940 TEST_HEADER include/spdk/ioat_spec.h 00:04:32.940 TEST_HEADER include/spdk/iscsi_spec.h 00:04:32.940 TEST_HEADER include/spdk/json.h 00:04:32.940 TEST_HEADER include/spdk/jsonrpc.h 00:04:32.940 TEST_HEADER include/spdk/keyring.h 00:04:32.940 TEST_HEADER include/spdk/keyring_module.h 00:04:32.940 TEST_HEADER include/spdk/likely.h 00:04:32.940 TEST_HEADER include/spdk/log.h 00:04:32.940 TEST_HEADER include/spdk/lvol.h 00:04:32.940 TEST_HEADER include/spdk/md5.h 00:04:32.940 TEST_HEADER include/spdk/memory.h 00:04:32.940 TEST_HEADER include/spdk/mmio.h 00:04:32.940 TEST_HEADER include/spdk/nbd.h 00:04:32.940 TEST_HEADER include/spdk/net.h 00:04:32.940 TEST_HEADER include/spdk/notify.h 00:04:32.940 TEST_HEADER include/spdk/nvme.h 00:04:32.940 TEST_HEADER include/spdk/nvme_intel.h 00:04:32.940 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:32.940 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:32.940 CC test/app/bdev_svc/bdev_svc.o 00:04:32.940 TEST_HEADER include/spdk/nvme_spec.h 00:04:32.940 TEST_HEADER include/spdk/nvme_zns.h 00:04:32.940 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:32.940 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:32.940 TEST_HEADER include/spdk/nvmf.h 00:04:32.940 TEST_HEADER include/spdk/nvmf_spec.h 00:04:32.940 TEST_HEADER include/spdk/nvmf_transport.h 00:04:32.940 TEST_HEADER include/spdk/opal.h 00:04:32.940 CC test/env/mem_callbacks/mem_callbacks.o 00:04:32.940 TEST_HEADER include/spdk/opal_spec.h 00:04:32.940 CC test/dma/test_dma/test_dma.o 00:04:32.940 TEST_HEADER include/spdk/pci_ids.h 00:04:32.940 TEST_HEADER include/spdk/pipe.h 00:04:32.940 TEST_HEADER include/spdk/queue.h 00:04:32.940 TEST_HEADER include/spdk/reduce.h 00:04:32.940 TEST_HEADER include/spdk/rpc.h 00:04:32.940 TEST_HEADER 
include/spdk/scheduler.h 00:04:32.940 TEST_HEADER include/spdk/scsi.h 00:04:32.940 TEST_HEADER include/spdk/scsi_spec.h 00:04:32.940 TEST_HEADER include/spdk/sock.h 00:04:32.940 TEST_HEADER include/spdk/stdinc.h 00:04:32.940 TEST_HEADER include/spdk/string.h 00:04:33.198 TEST_HEADER include/spdk/thread.h 00:04:33.198 TEST_HEADER include/spdk/trace.h 00:04:33.198 TEST_HEADER include/spdk/trace_parser.h 00:04:33.198 TEST_HEADER include/spdk/tree.h 00:04:33.198 TEST_HEADER include/spdk/ublk.h 00:04:33.198 TEST_HEADER include/spdk/util.h 00:04:33.198 TEST_HEADER include/spdk/uuid.h 00:04:33.198 TEST_HEADER include/spdk/version.h 00:04:33.198 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:33.198 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:33.198 TEST_HEADER include/spdk/vhost.h 00:04:33.198 TEST_HEADER include/spdk/vmd.h 00:04:33.198 TEST_HEADER include/spdk/xor.h 00:04:33.198 TEST_HEADER include/spdk/zipf.h 00:04:33.198 CXX test/cpp_headers/accel.o 00:04:33.198 LINK rpc_client_test 00:04:33.198 LINK zipf 00:04:33.198 LINK poller_perf 00:04:33.198 LINK interrupt_tgt 00:04:33.198 LINK bdev_svc 00:04:33.198 LINK ioat_perf 00:04:33.198 CXX test/cpp_headers/accel_module.o 00:04:33.198 LINK spdk_trace 00:04:33.198 CC examples/ioat/verify/verify.o 00:04:33.456 CC app/trace_record/trace_record.o 00:04:33.456 CXX test/cpp_headers/assert.o 00:04:33.456 CC app/nvmf_tgt/nvmf_main.o 00:04:33.456 LINK mem_callbacks 00:04:33.456 CC app/iscsi_tgt/iscsi_tgt.o 00:04:33.456 CXX test/cpp_headers/barrier.o 00:04:33.456 CC test/env/vtophys/vtophys.o 00:04:33.456 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:33.456 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:33.456 CXX test/cpp_headers/base64.o 00:04:33.456 LINK verify 00:04:33.456 LINK test_dma 00:04:33.456 LINK spdk_trace_record 00:04:33.714 LINK nvmf_tgt 00:04:33.714 LINK vtophys 00:04:33.714 LINK iscsi_tgt 00:04:33.714 CXX test/cpp_headers/bdev.o 00:04:33.714 CC test/event/event_perf/event_perf.o 00:04:33.714 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:33.714 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:33.714 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:33.714 LINK event_perf 00:04:33.973 CC app/spdk_tgt/spdk_tgt.o 00:04:33.973 CXX test/cpp_headers/bdev_module.o 00:04:33.973 CC test/event/reactor/reactor.o 00:04:33.973 CC examples/thread/thread/thread_ex.o 00:04:33.973 CXX test/cpp_headers/bdev_zone.o 00:04:33.973 CXX test/cpp_headers/bit_array.o 00:04:33.973 LINK env_dpdk_post_init 00:04:33.973 LINK nvme_fuzz 00:04:33.973 LINK reactor 00:04:33.973 CXX test/cpp_headers/bit_pool.o 00:04:33.973 LINK spdk_tgt 00:04:34.232 LINK thread 00:04:34.232 CC test/event/reactor_perf/reactor_perf.o 00:04:34.232 CC test/env/memory/memory_ut.o 00:04:34.232 CC test/app/histogram_perf/histogram_perf.o 00:04:34.232 CC test/app/jsoncat/jsoncat.o 00:04:34.232 CXX test/cpp_headers/blob_bdev.o 00:04:34.232 CC app/spdk_lspci/spdk_lspci.o 00:04:34.232 LINK vhost_fuzz 00:04:34.232 CXX test/cpp_headers/blobfs_bdev.o 00:04:34.232 LINK reactor_perf 00:04:34.232 LINK jsoncat 00:04:34.232 LINK histogram_perf 00:04:34.232 CXX test/cpp_headers/blobfs.o 00:04:34.489 LINK spdk_lspci 00:04:34.489 CC examples/sock/hello_world/hello_sock.o 00:04:34.489 CXX test/cpp_headers/blob.o 00:04:34.489 CC test/event/app_repeat/app_repeat.o 00:04:34.489 CC app/spdk_nvme_perf/perf.o 00:04:34.489 CXX test/cpp_headers/conf.o 00:04:34.489 CC app/spdk_nvme_identify/identify.o 00:04:34.489 CC app/spdk_nvme_discover/discovery_aer.o 00:04:34.489 CC test/accel/dif/dif.o 00:04:34.747 CXX 
test/cpp_headers/config.o 00:04:34.747 LINK app_repeat 00:04:34.747 CXX test/cpp_headers/cpuset.o 00:04:34.747 LINK hello_sock 00:04:34.747 CC test/event/scheduler/scheduler.o 00:04:34.747 LINK spdk_nvme_discover 00:04:34.747 CXX test/cpp_headers/crc16.o 00:04:34.747 LINK scheduler 00:04:35.005 CC test/env/pci/pci_ut.o 00:04:35.005 CXX test/cpp_headers/crc32.o 00:04:35.005 CC examples/vmd/lsvmd/lsvmd.o 00:04:35.005 LINK lsvmd 00:04:35.005 CC test/blobfs/mkfs/mkfs.o 00:04:35.005 CXX test/cpp_headers/crc64.o 00:04:35.005 LINK iscsi_fuzz 00:04:35.005 CC examples/vmd/led/led.o 00:04:35.264 CXX test/cpp_headers/dif.o 00:04:35.264 CXX test/cpp_headers/dma.o 00:04:35.264 LINK mkfs 00:04:35.264 LINK memory_ut 00:04:35.264 LINK dif 00:04:35.264 LINK led 00:04:35.264 LINK pci_ut 00:04:35.264 CC test/app/stub/stub.o 00:04:35.264 CXX test/cpp_headers/endian.o 00:04:35.264 LINK spdk_nvme_identify 00:04:35.264 LINK spdk_nvme_perf 00:04:35.522 CXX test/cpp_headers/env_dpdk.o 00:04:35.522 CXX test/cpp_headers/env.o 00:04:35.522 CXX test/cpp_headers/event.o 00:04:35.522 LINK stub 00:04:35.522 CC test/lvol/esnap/esnap.o 00:04:35.522 CXX test/cpp_headers/fd_group.o 00:04:35.522 CC app/spdk_top/spdk_top.o 00:04:35.522 CC examples/idxd/perf/perf.o 00:04:35.522 CC app/vhost/vhost.o 00:04:35.522 CXX test/cpp_headers/fd.o 00:04:35.780 CC test/nvme/aer/aer.o 00:04:35.780 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:35.780 CC examples/accel/perf/accel_perf.o 00:04:35.780 CXX test/cpp_headers/file.o 00:04:35.780 CC test/bdev/bdevio/bdevio.o 00:04:35.780 LINK vhost 00:04:35.780 CC examples/blob/hello_world/hello_blob.o 00:04:35.780 LINK hello_fsdev 00:04:36.039 LINK idxd_perf 00:04:36.039 CXX test/cpp_headers/fsdev.o 00:04:36.039 LINK aer 00:04:36.039 CXX test/cpp_headers/fsdev_module.o 00:04:36.039 CC examples/blob/cli/blobcli.o 00:04:36.039 LINK hello_blob 00:04:36.039 CC test/nvme/reset/reset.o 00:04:36.039 LINK bdevio 00:04:36.039 CXX test/cpp_headers/ftl.o 00:04:36.299 CC app/spdk_dd/spdk_dd.o 00:04:36.299 LINK accel_perf 00:04:36.299 CXX test/cpp_headers/fuse_dispatcher.o 00:04:36.299 CC app/fio/nvme/fio_plugin.o 00:04:36.299 CXX test/cpp_headers/gpt_spec.o 00:04:36.299 LINK reset 00:04:36.299 CC app/fio/bdev/fio_plugin.o 00:04:36.556 CXX test/cpp_headers/hexlify.o 00:04:36.556 CC test/nvme/sgl/sgl.o 00:04:36.556 LINK spdk_dd 00:04:36.556 LINK spdk_top 00:04:36.556 LINK blobcli 00:04:36.556 CC examples/nvme/hello_world/hello_world.o 00:04:36.556 CC test/nvme/e2edp/nvme_dp.o 00:04:36.556 CXX test/cpp_headers/histogram_data.o 00:04:36.556 CXX test/cpp_headers/idxd.o 00:04:36.556 CXX test/cpp_headers/idxd_spec.o 00:04:36.814 CXX test/cpp_headers/init.o 00:04:36.814 LINK sgl 00:04:36.814 LINK hello_world 00:04:36.814 LINK nvme_dp 00:04:36.814 CXX test/cpp_headers/ioat.o 00:04:36.814 CC test/nvme/overhead/overhead.o 00:04:36.814 CC test/nvme/err_injection/err_injection.o 00:04:36.814 CC test/nvme/startup/startup.o 00:04:36.814 LINK spdk_bdev 00:04:36.814 LINK spdk_nvme 00:04:36.814 CXX test/cpp_headers/ioat_spec.o 00:04:37.072 CC test/nvme/reserve/reserve.o 00:04:37.072 CC test/nvme/simple_copy/simple_copy.o 00:04:37.072 CXX test/cpp_headers/iscsi_spec.o 00:04:37.072 CC examples/nvme/reconnect/reconnect.o 00:04:37.072 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:37.072 LINK err_injection 00:04:37.072 LINK startup 00:04:37.072 CC examples/nvme/arbitration/arbitration.o 00:04:37.072 LINK overhead 00:04:37.072 CXX test/cpp_headers/json.o 00:04:37.072 LINK reserve 00:04:37.072 LINK simple_copy 00:04:37.072 CXX 
test/cpp_headers/jsonrpc.o 00:04:37.330 CXX test/cpp_headers/keyring.o 00:04:37.330 CXX test/cpp_headers/keyring_module.o 00:04:37.330 CXX test/cpp_headers/likely.o 00:04:37.330 CXX test/cpp_headers/log.o 00:04:37.330 CXX test/cpp_headers/lvol.o 00:04:37.330 CC test/nvme/connect_stress/connect_stress.o 00:04:37.330 LINK arbitration 00:04:37.330 LINK reconnect 00:04:37.330 CC examples/nvme/hotplug/hotplug.o 00:04:37.594 CC test/nvme/boot_partition/boot_partition.o 00:04:37.594 CXX test/cpp_headers/md5.o 00:04:37.594 CC test/nvme/compliance/nvme_compliance.o 00:04:37.594 LINK connect_stress 00:04:37.594 CC examples/bdev/hello_world/hello_bdev.o 00:04:37.594 CC examples/bdev/bdevperf/bdevperf.o 00:04:37.594 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:37.594 LINK nvme_manage 00:04:37.594 LINK boot_partition 00:04:37.594 LINK hotplug 00:04:37.865 CXX test/cpp_headers/memory.o 00:04:37.865 CC examples/nvme/abort/abort.o 00:04:37.865 CXX test/cpp_headers/mmio.o 00:04:37.865 CXX test/cpp_headers/nbd.o 00:04:37.865 LINK cmb_copy 00:04:37.865 LINK hello_bdev 00:04:37.865 CXX test/cpp_headers/net.o 00:04:37.865 LINK nvme_compliance 00:04:37.865 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:37.865 CC test/nvme/fused_ordering/fused_ordering.o 00:04:37.865 CXX test/cpp_headers/notify.o 00:04:37.865 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:38.124 CXX test/cpp_headers/nvme.o 00:04:38.124 CC test/nvme/fdp/fdp.o 00:04:38.124 CXX test/cpp_headers/nvme_intel.o 00:04:38.124 CC test/nvme/cuse/cuse.o 00:04:38.124 LINK pmr_persistence 00:04:38.124 LINK fused_ordering 00:04:38.124 LINK abort 00:04:38.124 LINK doorbell_aers 00:04:38.124 CXX test/cpp_headers/nvme_ocssd.o 00:04:38.124 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:38.124 CXX test/cpp_headers/nvme_spec.o 00:04:38.397 CXX test/cpp_headers/nvme_zns.o 00:04:38.397 CXX test/cpp_headers/nvmf_cmd.o 00:04:38.397 LINK fdp 00:04:38.397 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:38.397 CXX test/cpp_headers/nvmf.o 00:04:38.397 CXX test/cpp_headers/nvmf_spec.o 00:04:38.397 CXX test/cpp_headers/nvmf_transport.o 00:04:38.397 LINK bdevperf 00:04:38.397 CXX test/cpp_headers/opal.o 00:04:38.397 CXX test/cpp_headers/opal_spec.o 00:04:38.397 CXX test/cpp_headers/pci_ids.o 00:04:38.397 CXX test/cpp_headers/pipe.o 00:04:38.397 CXX test/cpp_headers/queue.o 00:04:38.656 CXX test/cpp_headers/reduce.o 00:04:38.656 CXX test/cpp_headers/rpc.o 00:04:38.656 CXX test/cpp_headers/scheduler.o 00:04:38.656 CXX test/cpp_headers/scsi.o 00:04:38.656 CXX test/cpp_headers/scsi_spec.o 00:04:38.656 CXX test/cpp_headers/sock.o 00:04:38.656 CXX test/cpp_headers/stdinc.o 00:04:38.656 CXX test/cpp_headers/string.o 00:04:38.656 CXX test/cpp_headers/thread.o 00:04:38.656 CXX test/cpp_headers/trace.o 00:04:38.656 CXX test/cpp_headers/trace_parser.o 00:04:38.656 CXX test/cpp_headers/tree.o 00:04:38.656 CXX test/cpp_headers/ublk.o 00:04:38.656 CXX test/cpp_headers/util.o 00:04:38.656 CXX test/cpp_headers/uuid.o 00:04:38.914 CC examples/nvmf/nvmf/nvmf.o 00:04:38.914 CXX test/cpp_headers/version.o 00:04:38.914 CXX test/cpp_headers/vfio_user_pci.o 00:04:38.914 CXX test/cpp_headers/vfio_user_spec.o 00:04:38.914 CXX test/cpp_headers/vhost.o 00:04:38.914 CXX test/cpp_headers/vmd.o 00:04:38.914 CXX test/cpp_headers/xor.o 00:04:38.914 CXX test/cpp_headers/zipf.o 00:04:39.173 LINK nvmf 00:04:39.173 LINK cuse 00:04:40.546 LINK esnap 00:04:41.110 00:04:41.110 real 1m9.506s 00:04:41.110 user 6m29.423s 00:04:41.110 sys 1m7.482s 00:04:41.110 03:55:28 make -- common/autotest_common.sh@1130 -- $ 
xtrace_disable 00:04:41.110 ************************************ 00:04:41.110 03:55:28 make -- common/autotest_common.sh@10 -- $ set +x 00:04:41.110 END TEST make 00:04:41.110 ************************************ 00:04:41.110 03:55:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:41.110 03:55:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:41.110 03:55:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:41.110 03:55:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.110 03:55:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:41.110 03:55:28 -- pm/common@44 -- $ pid=5070 00:04:41.110 03:55:28 -- pm/common@50 -- $ kill -TERM 5070 00:04:41.110 03:55:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.110 03:55:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:41.110 03:55:28 -- pm/common@44 -- $ pid=5071 00:04:41.110 03:55:28 -- pm/common@50 -- $ kill -TERM 5071 00:04:41.110 03:55:28 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:41.110 03:55:28 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:41.110 03:55:28 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:41.110 03:55:28 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:41.110 03:55:28 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:41.110 03:55:28 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:41.110 03:55:28 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.110 03:55:28 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.110 03:55:28 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.110 03:55:28 -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.110 03:55:28 -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.110 03:55:28 -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.110 03:55:28 -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.110 03:55:28 -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.110 03:55:28 -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.110 03:55:28 -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.110 03:55:28 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.110 03:55:28 -- scripts/common.sh@344 -- # case "$op" in 00:04:41.110 03:55:28 -- scripts/common.sh@345 -- # : 1 00:04:41.110 03:55:28 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.110 03:55:28 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.110 03:55:28 -- scripts/common.sh@365 -- # decimal 1 00:04:41.110 03:55:28 -- scripts/common.sh@353 -- # local d=1 00:04:41.110 03:55:28 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.110 03:55:28 -- scripts/common.sh@355 -- # echo 1 00:04:41.110 03:55:28 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.110 03:55:28 -- scripts/common.sh@366 -- # decimal 2 00:04:41.110 03:55:28 -- scripts/common.sh@353 -- # local d=2 00:04:41.110 03:55:28 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.110 03:55:28 -- scripts/common.sh@355 -- # echo 2 00:04:41.110 03:55:28 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.110 03:55:28 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.110 03:55:28 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.110 03:55:28 -- scripts/common.sh@368 -- # return 0 00:04:41.110 03:55:28 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.110 03:55:28 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:41.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.110 --rc genhtml_branch_coverage=1 00:04:41.110 --rc genhtml_function_coverage=1 00:04:41.110 --rc genhtml_legend=1 00:04:41.110 --rc geninfo_all_blocks=1 00:04:41.110 --rc geninfo_unexecuted_blocks=1 00:04:41.110 00:04:41.110 ' 00:04:41.110 03:55:28 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:41.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.110 --rc genhtml_branch_coverage=1 00:04:41.110 --rc genhtml_function_coverage=1 00:04:41.110 --rc genhtml_legend=1 00:04:41.110 --rc geninfo_all_blocks=1 00:04:41.111 --rc geninfo_unexecuted_blocks=1 00:04:41.111 00:04:41.111 ' 00:04:41.111 03:55:28 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:41.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.111 --rc genhtml_branch_coverage=1 00:04:41.111 --rc genhtml_function_coverage=1 00:04:41.111 --rc genhtml_legend=1 00:04:41.111 --rc geninfo_all_blocks=1 00:04:41.111 --rc geninfo_unexecuted_blocks=1 00:04:41.111 00:04:41.111 ' 00:04:41.111 03:55:28 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:41.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.111 --rc genhtml_branch_coverage=1 00:04:41.111 --rc genhtml_function_coverage=1 00:04:41.111 --rc genhtml_legend=1 00:04:41.111 --rc geninfo_all_blocks=1 00:04:41.111 --rc geninfo_unexecuted_blocks=1 00:04:41.111 00:04:41.111 ' 00:04:41.111 03:55:28 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:41.111 03:55:28 -- nvmf/common.sh@7 -- # uname -s 00:04:41.111 03:55:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.111 03:55:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.111 03:55:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.111 03:55:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.111 03:55:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.111 03:55:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.111 03:55:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.111 03:55:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.111 03:55:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.111 03:55:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.111 03:55:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:74b81f80-223e-4515-b804-645729820039 00:04:41.111 
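The lcov version check traced above (lt 1.15 2, expanding to cmp_versions 1.15 '<' 2) boils down to splitting both version strings on '.', '-' and ':' and comparing them component by component as integers. A condensed, standalone sketch of that logic, assuming purely numeric components; this is illustrative and not the verbatim scripts/common.sh source, which also sanitizes each component through its decimal helper:

#!/usr/bin/env bash
# Split both versions into arrays and compare per component, padding the
# shorter one with zeros. Assumes numeric components (no 'rc1' suffixes).
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v c1 c2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        c1=${ver1[v]:-0} c2=${ver2[v]:-0}
        (( c1 > c2 )) && { [[ $op == '>' ]]; return; }
        (( c1 < c2 )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '=' ]]
}
cmp_versions 1.15 '<' 2 && echo 'lcov 1.15 predates 2.x'   # matches the trace above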
03:55:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=74b81f80-223e-4515-b804-645729820039 00:04:41.111 03:55:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.111 03:55:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.111 03:55:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.111 03:55:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.111 03:55:28 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:41.111 03:55:28 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:41.111 03:55:28 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.111 03:55:28 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.111 03:55:28 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.111 03:55:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.111 03:55:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.111 03:55:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.111 03:55:28 -- paths/export.sh@5 -- # export PATH 00:04:41.111 03:55:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.111 03:55:28 -- nvmf/common.sh@51 -- # : 0 00:04:41.111 03:55:28 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:41.111 03:55:28 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:41.111 03:55:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.111 03:55:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.111 03:55:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.111 03:55:28 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:41.111 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:41.111 03:55:28 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:41.111 03:55:28 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:41.111 03:55:28 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:41.111 03:55:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:41.111 03:55:28 -- spdk/autotest.sh@32 -- # uname -s 00:04:41.111 03:55:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:41.111 03:55:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:41.111 03:55:28 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:41.111 03:55:28 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:41.111 03:55:28 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:41.111 03:55:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:41.368 03:55:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:41.368 03:55:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:41.368 03:55:28 -- spdk/autotest.sh@48 -- # udevadm_pid=54257 00:04:41.368 03:55:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:41.368 03:55:28 -- pm/common@17 -- # local monitor 00:04:41.368 03:55:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.368 03:55:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:41.368 03:55:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.368 03:55:28 -- pm/common@25 -- # sleep 1 00:04:41.368 03:55:28 -- pm/common@21 -- # date +%s 00:04:41.368 03:55:28 -- pm/common@21 -- # date +%s 00:04:41.368 03:55:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733457328 00:04:41.368 03:55:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733457328 00:04:41.368 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733457328_collect-cpu-load.pm.log 00:04:41.368 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733457328_collect-vmstat.pm.log 00:04:42.300 03:55:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:42.300 03:55:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:42.300 03:55:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:42.300 03:55:29 -- common/autotest_common.sh@10 -- # set +x 00:04:42.300 03:55:29 -- spdk/autotest.sh@59 -- # create_test_list 00:04:42.300 03:55:29 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:42.300 03:55:29 -- common/autotest_common.sh@10 -- # set +x 00:04:42.300 03:55:29 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:42.300 03:55:29 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:42.300 03:55:29 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:42.300 03:55:29 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:42.300 03:55:29 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:42.300 03:55:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:42.300 03:55:29 -- common/autotest_common.sh@1457 -- # uname 00:04:42.300 03:55:29 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:42.300 03:55:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:42.300 03:55:29 -- common/autotest_common.sh@1477 -- # uname 00:04:42.300 03:55:29 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:42.300 03:55:29 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:42.300 03:55:29 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:42.300 lcov: LCOV version 1.15 00:04:42.300 03:55:29 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:57.370 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:57.370 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:12.240 03:55:58 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:12.240 03:55:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.240 03:55:58 -- common/autotest_common.sh@10 -- # set +x 00:05:12.240 03:55:58 -- spdk/autotest.sh@78 -- # rm -f 00:05:12.240 03:55:58 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:12.240 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:12.240 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:12.240 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:12.240 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:12.240 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:12.240 03:55:58 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:12.240 03:55:58 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:12.240 03:55:58 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:12.240 03:55:58 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:12.240 03:55:58 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:12.240 03:55:58 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:12.240 03:55:58 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:12.240 03:55:58 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:05:12.240 03:55:58 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:12.240 03:55:58 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:12.240 03:55:58 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:12.240 03:55:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:12.240 03:55:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:12.240 03:55:58 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:12.240 03:55:58 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:05:12.240 03:55:58 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:12.240 03:55:58 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:12.240 03:55:58 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:12.240 03:55:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:12.240 03:55:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:12.240 03:55:58 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:12.240 03:55:58 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:05:12.240 03:55:58 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:12.240 03:55:58 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:05:12.240 03:55:58 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:12.240 03:55:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:12.240 03:55:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:12.240 03:55:58 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:12.240 03:55:58 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:05:12.240 03:55:58 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:05:12.240 03:55:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:12.240 03:55:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:12.240 03:55:58 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:12.240 03:55:58 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:05:12.240 03:55:58 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:05:12.240 03:55:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:12.240 03:55:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:12.240 03:55:58 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:12.240 03:55:58 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:05:12.240 03:55:58 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:12.240 03:55:58 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:05:12.240 03:55:58 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:12.240 03:55:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:12.240 03:55:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:12.240 03:55:58 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:12.240 03:55:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:12.240 03:55:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:12.240 03:55:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:12.240 03:55:58 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:12.240 03:55:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:12.240 No valid GPT data, bailing 00:05:12.240 03:55:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:12.240 03:55:59 -- scripts/common.sh@394 -- # pt= 00:05:12.240 03:55:59 -- scripts/common.sh@395 -- # return 1 00:05:12.240 03:55:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:12.240 1+0 records in 00:05:12.240 1+0 records out 00:05:12.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128551 s, 81.6 MB/s 00:05:12.240 03:55:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:12.240 03:55:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:12.241 03:55:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:12.241 03:55:59 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:12.241 03:55:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:12.241 No valid GPT data, bailing 00:05:12.241 03:55:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:12.241 03:55:59 -- scripts/common.sh@394 -- # pt= 00:05:12.241 03:55:59 -- scripts/common.sh@395 -- # return 1 00:05:12.241 03:55:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:12.241 1+0 records in 00:05:12.241 1+0 records out 00:05:12.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00587945 s, 178 MB/s 00:05:12.241 03:55:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:12.241 03:55:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:12.241 03:55:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:12.241 03:55:59 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:12.241 03:55:59 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:12.241 No valid GPT data, bailing 00:05:12.241 03:55:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:12.241 03:55:59 -- scripts/common.sh@394 -- # pt= 00:05:12.241 03:55:59 -- scripts/common.sh@395 -- # return 1 00:05:12.241 03:55:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:12.241 1+0 records in 00:05:12.241 1+0 records out 00:05:12.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00437697 s, 240 MB/s 00:05:12.241 03:55:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:12.241 03:55:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:12.241 03:55:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:12.241 03:55:59 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:12.241 03:55:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:12.241 No valid GPT data, bailing 00:05:12.241 03:55:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:12.241 03:55:59 -- scripts/common.sh@394 -- # pt= 00:05:12.241 03:55:59 -- scripts/common.sh@395 -- # return 1 00:05:12.241 03:55:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:12.241 1+0 records in 00:05:12.241 1+0 records out 00:05:12.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00342052 s, 307 MB/s 00:05:12.241 03:55:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:12.241 03:55:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:12.241 03:55:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:12.241 03:55:59 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:12.241 03:55:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:12.241 No valid GPT data, bailing 00:05:12.241 03:55:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:12.241 03:55:59 -- scripts/common.sh@394 -- # pt= 00:05:12.241 03:55:59 -- scripts/common.sh@395 -- # return 1 00:05:12.241 03:55:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:12.241 1+0 records in 00:05:12.241 1+0 records out 00:05:12.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00299184 s, 350 MB/s 00:05:12.241 03:55:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:12.241 03:55:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:12.241 03:55:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:12.241 03:55:59 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:12.241 03:55:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:12.241 No valid GPT data, bailing 00:05:12.241 03:55:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:12.241 03:55:59 -- scripts/common.sh@394 -- # pt= 00:05:12.241 03:55:59 -- scripts/common.sh@395 -- # return 1 00:05:12.241 03:55:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:12.241 1+0 records in 00:05:12.241 1+0 records out 00:05:12.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00355878 s, 295 MB/s 00:05:12.241 03:55:59 -- spdk/autotest.sh@105 -- # sync 00:05:12.499 03:55:59 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:12.499 03:55:59 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:12.499 03:55:59 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:14.402 
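The pre_cleanup pass traced above repeats one probe-and-wipe pattern per NVMe namespace. A condensed sketch of that loop, with the extglob pattern and dd invocation taken from the trace; the spdk-gpt.py probe ("No valid GPT data, bailing") is simplified here to the blkid fallback the script also runs, and root privileges are assumed:

#!/usr/bin/env bash
shopt -s extglob
# For every whole NVMe namespace (!(*p*) skips partition nodes), probe
# for a partition table; when nothing is found, zero the first MiB so
# stale metadata cannot leak into the tests.
for dev in /dev/nvme*n!(*p*); do
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done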
03:56:01 -- spdk/autotest.sh@111 -- # uname -s 00:05:14.402 03:56:01 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:14.402 03:56:01 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:14.402 03:56:01 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:14.402 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.660 Hugepages 00:05:14.660 node hugesize free / total 00:05:14.660 node0 1048576kB 0 / 0 00:05:14.660 node0 2048kB 0 / 0 00:05:14.660 00:05:14.660 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:14.918 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:14.918 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:14.918 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:14.918 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:14.918 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:14.918 03:56:02 -- spdk/autotest.sh@117 -- # uname -s 00:05:14.918 03:56:02 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:14.918 03:56:02 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:14.918 03:56:02 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:15.484 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:16.053 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:16.053 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:16.053 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:16.053 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:16.053 03:56:03 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:16.989 03:56:04 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:16.989 03:56:04 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:16.989 03:56:04 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:16.989 03:56:04 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:16.989 03:56:04 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:16.989 03:56:04 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:16.989 03:56:04 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:16.989 03:56:04 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:16.989 03:56:04 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:16.989 03:56:04 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:16.989 03:56:04 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:16.989 03:56:04 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:17.247 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.505 Waiting for block devices as requested 00:05:17.505 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:17.505 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:17.505 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:17.764 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:23.059 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:23.059 03:56:10 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:23.059 03:56:10 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
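The get_nvme_bdfs helper expanded in the trace above reduces to one pipeline: gen_nvme.sh emits a JSON bdev_nvme config and jq extracts each controller's PCI address (traddr). As a standalone sketch, using the paths from this run:

#!/usr/bin/env bash
# Enumerate NVMe controller BDFs the way the traced helper does.
rootdir=/home/vagrant/spdk_repo/spdk
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
(( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"   # in this run: 0000:00:10.0 through 0000:00:13.0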
00:05:23.059 03:56:10 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:23.059 03:56:10 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:23.059 03:56:10 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:23.059 03:56:10 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:23.059 03:56:10 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:23.059 03:56:10 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:23.059 03:56:10 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:23.059 03:56:10 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:23.059 03:56:10 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:23.059 03:56:10 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:23.059 03:56:10 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:23.059 03:56:10 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:23.059 03:56:10 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:23.059 03:56:10 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:23.059 03:56:10 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:23.059 03:56:10 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:23.059 03:56:10 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:23.059 03:56:10 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:23.059 03:56:10 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:23.059 03:56:10 -- common/autotest_common.sh@1543 -- # continue 00:05:23.059 03:56:10 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:23.059 03:56:10 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:23.059 03:56:10 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:23.059 03:56:10 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:23.059 03:56:10 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:23.059 03:56:10 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:23.059 03:56:10 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:23.059 03:56:10 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:23.059 03:56:10 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:23.059 03:56:10 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:23.059 03:56:10 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:23.059 03:56:10 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:23.059 03:56:10 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:23.059 03:56:10 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:23.059 03:56:10 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:23.059 03:56:10 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:23.059 03:56:10 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:23.059 03:56:10 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:23.059 03:56:10 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:23.059 03:56:10 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:05:23.059 03:56:10 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:23.059 03:56:10 -- common/autotest_common.sh@1543 -- # continue 00:05:23.060 03:56:10 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:23.060 03:56:10 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:23.060 03:56:10 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:23.060 03:56:10 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:23.060 03:56:10 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:23.060 03:56:10 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:23.060 03:56:10 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:23.060 03:56:10 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:23.060 03:56:10 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:23.060 03:56:10 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:23.060 03:56:10 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:23.060 03:56:10 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:23.060 03:56:10 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:23.060 03:56:10 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:23.060 03:56:10 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:23.060 03:56:10 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:23.060 03:56:10 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:23.060 03:56:10 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:23.060 03:56:10 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:23.060 03:56:10 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:23.060 03:56:10 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:23.060 03:56:10 -- common/autotest_common.sh@1543 -- # continue 00:05:23.060 03:56:10 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:23.060 03:56:10 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:23.060 03:56:10 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:23.060 03:56:10 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:23.060 03:56:10 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:23.060 03:56:10 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:23.060 03:56:10 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:23.060 03:56:10 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:23.060 03:56:10 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:23.060 03:56:10 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:23.060 03:56:10 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:23.060 03:56:10 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:23.060 03:56:10 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:23.060 03:56:10 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:23.060 03:56:10 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:23.060 03:56:10 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:23.060 03:56:10 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:23.060 03:56:10 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:23.060 03:56:10 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:23.060 03:56:10 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:23.060 03:56:10 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:23.060 03:56:10 -- common/autotest_common.sh@1543 -- # continue 00:05:23.060 03:56:10 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:23.060 03:56:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:23.060 03:56:10 -- common/autotest_common.sh@10 -- # set +x 00:05:23.060 03:56:10 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:23.060 03:56:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.060 03:56:10 -- common/autotest_common.sh@10 -- # set +x 00:05:23.060 03:56:10 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:23.319 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.885 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.885 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.885 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.885 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.885 03:56:11 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:23.885 03:56:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:23.885 03:56:11 -- common/autotest_common.sh@10 -- # set +x 00:05:23.885 03:56:11 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:23.885 03:56:11 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:23.885 03:56:11 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:23.885 03:56:11 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:23.885 03:56:11 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:23.885 03:56:11 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:23.885 03:56:11 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:23.885 03:56:11 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:23.885 03:56:11 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:23.885 03:56:11 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:23.885 03:56:11 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:23.885 03:56:11 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:23.885 03:56:11 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:23.885 03:56:11 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:23.885 03:56:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:23.885 03:56:11 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:23.885 03:56:11 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:23.885 03:56:11 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:23.885 03:56:11 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:23.885 03:56:11 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:23.885 03:56:11 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:23.885 03:56:11 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:23.885 
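Each controller in the pre_cleanup loop traced above is gated on its Optional Admin Command Support (OACS) word: nvme id-ctrl output is parsed with grep and cut, bit 0x08 (namespace management) must be set, and an unvmcap of 0 lets the loop continue. A standalone sketch of that check; the controller path is illustrative, since the real script derives it from the BDF through /sys/class/nvme:

#!/usr/bin/env bash
ctrlr=/dev/nvme1   # illustrative; resolved from the PCI BDF in the traced script
# OACS is reported as e.g. 'oacs : 0x12a'; bit 0x08 is namespace management.
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
if (( (oacs & 0x8) != 0 )); then
    # Namespace management supported; check unallocated NVM capacity.
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    echo "oacs=$oacs unvmcap=$unvmcap"
fi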
03:56:11 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:23.885 03:56:11 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:23.885 03:56:11 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:23.885 03:56:11 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:23.885 03:56:11 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:23.885 03:56:11 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:23.885 03:56:11 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:23.885 03:56:11 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:23.885 03:56:11 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:23.885 03:56:11 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:23.885 03:56:11 -- common/autotest_common.sh@1572 -- # return 0 00:05:23.885 03:56:11 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:23.885 03:56:11 -- common/autotest_common.sh@1580 -- # return 0 00:05:23.885 03:56:11 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:23.885 03:56:11 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:23.885 03:56:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:23.885 03:56:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:23.885 03:56:11 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:23.886 03:56:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.886 03:56:11 -- common/autotest_common.sh@10 -- # set +x 00:05:23.886 03:56:11 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:23.886 03:56:11 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:23.886 03:56:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.886 03:56:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.886 03:56:11 -- common/autotest_common.sh@10 -- # set +x 00:05:23.886 ************************************ 00:05:23.886 START TEST env 00:05:23.886 ************************************ 00:05:23.886 03:56:11 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:24.145 * Looking for test storage... 
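opal_revert_cleanup, traced above, only collects controllers whose PCI device ID matches 0x0a54; all four QEMU controllers in this run report 0x0010, so the candidate list stays empty and the revert is skipped. The gate as a standalone sketch, with the BDFs from this run hard-coded for illustration:

#!/usr/bin/env bash
# Keep only controllers whose sysfs device ID is 0x0a54.
bdfs=()
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    [[ $device == 0x0a54 ]] && bdfs+=("$bdf")
done
echo "controllers to revert: ${#bdfs[@]}"   # 0 in this run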
00:05:24.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:24.145 03:56:11 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.145 03:56:11 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.145 03:56:11 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.145 03:56:11 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.145 03:56:11 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.145 03:56:11 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.145 03:56:11 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.145 03:56:11 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.145 03:56:11 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.145 03:56:11 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.145 03:56:11 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.145 03:56:11 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.145 03:56:11 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.145 03:56:11 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.145 03:56:11 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.145 03:56:11 env -- scripts/common.sh@344 -- # case "$op" in 00:05:24.145 03:56:11 env -- scripts/common.sh@345 -- # : 1 00:05:24.145 03:56:11 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.145 03:56:11 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.145 03:56:11 env -- scripts/common.sh@365 -- # decimal 1 00:05:24.145 03:56:11 env -- scripts/common.sh@353 -- # local d=1 00:05:24.145 03:56:11 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.145 03:56:11 env -- scripts/common.sh@355 -- # echo 1 00:05:24.145 03:56:11 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.145 03:56:11 env -- scripts/common.sh@366 -- # decimal 2 00:05:24.145 03:56:11 env -- scripts/common.sh@353 -- # local d=2 00:05:24.145 03:56:11 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.145 03:56:11 env -- scripts/common.sh@355 -- # echo 2 00:05:24.145 03:56:11 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.145 03:56:11 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.145 03:56:11 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.145 03:56:11 env -- scripts/common.sh@368 -- # return 0 00:05:24.145 03:56:11 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.145 03:56:11 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.145 --rc genhtml_branch_coverage=1 00:05:24.145 --rc genhtml_function_coverage=1 00:05:24.145 --rc genhtml_legend=1 00:05:24.145 --rc geninfo_all_blocks=1 00:05:24.145 --rc geninfo_unexecuted_blocks=1 00:05:24.145 00:05:24.145 ' 00:05:24.145 03:56:11 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.145 --rc genhtml_branch_coverage=1 00:05:24.145 --rc genhtml_function_coverage=1 00:05:24.145 --rc genhtml_legend=1 00:05:24.145 --rc geninfo_all_blocks=1 00:05:24.145 --rc geninfo_unexecuted_blocks=1 00:05:24.145 00:05:24.145 ' 00:05:24.145 03:56:11 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.145 --rc genhtml_branch_coverage=1 00:05:24.145 --rc genhtml_function_coverage=1 00:05:24.145 --rc 
genhtml_legend=1 00:05:24.145 --rc geninfo_all_blocks=1 00:05:24.145 --rc geninfo_unexecuted_blocks=1 00:05:24.145 00:05:24.145 ' 00:05:24.145 03:56:11 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.145 --rc genhtml_branch_coverage=1 00:05:24.145 --rc genhtml_function_coverage=1 00:05:24.145 --rc genhtml_legend=1 00:05:24.145 --rc geninfo_all_blocks=1 00:05:24.145 --rc geninfo_unexecuted_blocks=1 00:05:24.145 00:05:24.145 ' 00:05:24.145 03:56:11 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:24.145 03:56:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.145 03:56:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.145 03:56:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.145 ************************************ 00:05:24.145 START TEST env_memory 00:05:24.145 ************************************ 00:05:24.146 03:56:11 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:24.146 00:05:24.146 00:05:24.146 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.146 http://cunit.sourceforge.net/ 00:05:24.146 00:05:24.146 00:05:24.146 Suite: memory 00:05:24.146 Test: alloc and free memory map ...[2024-12-06 03:56:11.556226] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:24.146 passed 00:05:24.146 Test: mem map translation ...[2024-12-06 03:56:11.595465] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:24.146 [2024-12-06 03:56:11.595528] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:24.146 [2024-12-06 03:56:11.595589] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:24.146 [2024-12-06 03:56:11.595605] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:24.146 passed 00:05:24.146 Test: mem map registration ...[2024-12-06 03:56:11.663729] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:24.146 [2024-12-06 03:56:11.663778] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:24.405 passed 00:05:24.405 Test: mem map adjacent registrations ...passed 00:05:24.405 00:05:24.405 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.405 suites 1 1 n/a 0 0 00:05:24.405 tests 4 4 4 0 0 00:05:24.405 asserts 152 152 152 0 n/a 00:05:24.405 00:05:24.405 Elapsed time = 0.235 seconds 00:05:24.405 00:05:24.405 real 0m0.268s 00:05:24.405 user 0m0.241s 00:05:24.405 sys 0m0.021s 00:05:24.405 03:56:11 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.405 03:56:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:24.405 ************************************ 00:05:24.405 END TEST env_memory 00:05:24.405 ************************************ 00:05:24.405 03:56:11 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:24.405 03:56:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.405 03:56:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.405 03:56:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.405 ************************************ 00:05:24.405 START TEST env_vtophys 00:05:24.405 ************************************ 00:05:24.405 03:56:11 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:24.405 EAL: lib.eal log level changed from notice to debug 00:05:24.405 EAL: Detected lcore 0 as core 0 on socket 0 00:05:24.405 EAL: Detected lcore 1 as core 0 on socket 0 00:05:24.405 EAL: Detected lcore 2 as core 0 on socket 0 00:05:24.405 EAL: Detected lcore 3 as core 0 on socket 0 00:05:24.405 EAL: Detected lcore 4 as core 0 on socket 0 00:05:24.405 EAL: Detected lcore 5 as core 0 on socket 0 00:05:24.405 EAL: Detected lcore 6 as core 0 on socket 0 00:05:24.405 EAL: Detected lcore 7 as core 0 on socket 0 00:05:24.405 EAL: Detected lcore 8 as core 0 on socket 0 00:05:24.405 EAL: Detected lcore 9 as core 0 on socket 0 00:05:24.405 EAL: Maximum logical cores by configuration: 128 00:05:24.405 EAL: Detected CPU lcores: 10 00:05:24.405 EAL: Detected NUMA nodes: 1 00:05:24.405 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:24.405 EAL: Detected shared linkage of DPDK 00:05:24.405 EAL: No shared files mode enabled, IPC will be disabled 00:05:24.405 EAL: Selected IOVA mode 'PA' 00:05:24.405 EAL: Probing VFIO support... 00:05:24.405 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:24.405 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:24.405 EAL: Ask a virtual area of 0x2e000 bytes 00:05:24.405 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:24.405 EAL: Setting up physically contiguous memory... 
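Annotation: the VFIO probe above falls back to IOVA mode 'PA' because neither vfio module is present in the VM. A minimal sketch of the same sysfs check EAL performs (the snippet is illustrative, not an SPDK script):

  # EAL logs "Module /sys/module/vfio not found" when this path is absent;
  # without vfio/vfio_pci it cannot use the IOMMU and selects IOVA mode 'PA'.
  if [[ -d /sys/module/vfio && -d /sys/module/vfio_pci ]]; then
      echo "VFIO loaded: IOVA mode 'VA' is possible"
  else
      echo "VFIO modules not loaded, skipping VFIO support"
  fi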
00:05:24.405 EAL: Setting maximum number of open files to 524288 00:05:24.405 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:24.405 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:24.405 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.405 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:24.405 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.405 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.405 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:24.405 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:24.405 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.405 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:24.405 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.405 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.405 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:24.405 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:24.405 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.405 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:24.405 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.405 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.405 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:24.405 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:24.405 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.405 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:24.405 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.405 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.405 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:24.405 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:24.405 EAL: Hugepages will be freed exactly as allocated. 00:05:24.405 EAL: No shared files mode enabled, IPC is disabled 00:05:24.406 EAL: No shared files mode enabled, IPC is disabled 00:05:24.665 EAL: TSC frequency is ~2600000 KHz 00:05:24.665 EAL: Main lcore 0 is ready (tid=7fd17e6e9a40;cpuset=[0]) 00:05:24.665 EAL: Trying to obtain current memory policy. 00:05:24.665 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.665 EAL: Restoring previous memory policy: 0 00:05:24.665 EAL: request: mp_malloc_sync 00:05:24.665 EAL: No shared files mode enabled, IPC is disabled 00:05:24.665 EAL: Heap on socket 0 was expanded by 2MB 00:05:24.665 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:24.665 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:24.665 EAL: Mem event callback 'spdk:(nil)' registered 00:05:24.665 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:24.665 00:05:24.665 00:05:24.665 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.665 http://cunit.sourceforge.net/ 00:05:24.665 00:05:24.665 00:05:24.665 Suite: components_suite 00:05:24.924 Test: vtophys_malloc_test ...passed 00:05:24.924 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
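Annotation: each of the four memseg lists above reserves a 0x400000000-byte virtual area, which is exactly n_segs times the hugepage size (8192 segments of 2 MiB, i.e. 16 GiB per list). A quick check of that arithmetic:

  # 8192 hugepages * 2 MiB each = 16 GiB per memseg list
  printf '0x%x\n' $(( 8192 * 2097152 ))   # prints 0x400000000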
00:05:24.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.924 EAL: Restoring previous memory policy: 4 00:05:24.924 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.924 EAL: request: mp_malloc_sync 00:05:24.924 EAL: No shared files mode enabled, IPC is disabled 00:05:24.924 EAL: Heap on socket 0 was expanded by 4MB 00:05:24.924 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.924 EAL: request: mp_malloc_sync 00:05:24.924 EAL: No shared files mode enabled, IPC is disabled 00:05:24.924 EAL: Heap on socket 0 was shrunk by 4MB 00:05:24.924 EAL: Trying to obtain current memory policy. 00:05:24.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.924 EAL: Restoring previous memory policy: 4 00:05:24.924 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.924 EAL: request: mp_malloc_sync 00:05:24.924 EAL: No shared files mode enabled, IPC is disabled 00:05:24.924 EAL: Heap on socket 0 was expanded by 6MB 00:05:24.924 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.924 EAL: request: mp_malloc_sync 00:05:24.924 EAL: No shared files mode enabled, IPC is disabled 00:05:24.924 EAL: Heap on socket 0 was shrunk by 6MB 00:05:24.924 EAL: Trying to obtain current memory policy. 00:05:24.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.924 EAL: Restoring previous memory policy: 4 00:05:24.924 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.924 EAL: request: mp_malloc_sync 00:05:24.924 EAL: No shared files mode enabled, IPC is disabled 00:05:24.924 EAL: Heap on socket 0 was expanded by 10MB 00:05:24.924 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.924 EAL: request: mp_malloc_sync 00:05:24.924 EAL: No shared files mode enabled, IPC is disabled 00:05:24.924 EAL: Heap on socket 0 was shrunk by 10MB 00:05:24.924 EAL: Trying to obtain current memory policy. 00:05:24.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.924 EAL: Restoring previous memory policy: 4 00:05:24.924 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.924 EAL: request: mp_malloc_sync 00:05:24.924 EAL: No shared files mode enabled, IPC is disabled 00:05:24.924 EAL: Heap on socket 0 was expanded by 18MB 00:05:24.924 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.924 EAL: request: mp_malloc_sync 00:05:24.924 EAL: No shared files mode enabled, IPC is disabled 00:05:24.924 EAL: Heap on socket 0 was shrunk by 18MB 00:05:24.924 EAL: Trying to obtain current memory policy. 00:05:24.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.924 EAL: Restoring previous memory policy: 4 00:05:24.924 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.924 EAL: request: mp_malloc_sync 00:05:24.924 EAL: No shared files mode enabled, IPC is disabled 00:05:24.924 EAL: Heap on socket 0 was expanded by 34MB 00:05:24.924 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.924 EAL: request: mp_malloc_sync 00:05:24.924 EAL: No shared files mode enabled, IPC is disabled 00:05:24.924 EAL: Heap on socket 0 was shrunk by 34MB 00:05:24.924 EAL: Trying to obtain current memory policy. 
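Annotation: the expansion sizes logged by vtophys_spdk_malloc_test (4, 6, 10, 18, 34 MB and onward) appear to follow a 2^k + 2 progression, so each step roughly doubles the heap while staying offset from an exact power of two. The logged sequence can be reproduced as:

  # Logged expansion sizes: 4 6 10 18 34 66 130 258 514 1026 (MB)
  for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo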
00:05:24.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.924 EAL: Restoring previous memory policy: 4 00:05:24.924 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.924 EAL: request: mp_malloc_sync 00:05:24.924 EAL: No shared files mode enabled, IPC is disabled 00:05:24.924 EAL: Heap on socket 0 was expanded by 66MB 00:05:24.924 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.924 EAL: request: mp_malloc_sync 00:05:24.924 EAL: No shared files mode enabled, IPC is disabled 00:05:24.924 EAL: Heap on socket 0 was shrunk by 66MB 00:05:25.184 EAL: Trying to obtain current memory policy. 00:05:25.184 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.184 EAL: Restoring previous memory policy: 4 00:05:25.184 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.184 EAL: request: mp_malloc_sync 00:05:25.184 EAL: No shared files mode enabled, IPC is disabled 00:05:25.184 EAL: Heap on socket 0 was expanded by 130MB 00:05:25.184 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.184 EAL: request: mp_malloc_sync 00:05:25.184 EAL: No shared files mode enabled, IPC is disabled 00:05:25.184 EAL: Heap on socket 0 was shrunk by 130MB 00:05:25.443 EAL: Trying to obtain current memory policy. 00:05:25.443 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.443 EAL: Restoring previous memory policy: 4 00:05:25.443 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.443 EAL: request: mp_malloc_sync 00:05:25.443 EAL: No shared files mode enabled, IPC is disabled 00:05:25.443 EAL: Heap on socket 0 was expanded by 258MB 00:05:25.443 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.701 EAL: request: mp_malloc_sync 00:05:25.701 EAL: No shared files mode enabled, IPC is disabled 00:05:25.701 EAL: Heap on socket 0 was shrunk by 258MB 00:05:25.701 EAL: Trying to obtain current memory policy. 00:05:25.702 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.960 EAL: Restoring previous memory policy: 4 00:05:25.960 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.960 EAL: request: mp_malloc_sync 00:05:25.960 EAL: No shared files mode enabled, IPC is disabled 00:05:25.960 EAL: Heap on socket 0 was expanded by 514MB 00:05:26.217 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.475 EAL: request: mp_malloc_sync 00:05:26.475 EAL: No shared files mode enabled, IPC is disabled 00:05:26.475 EAL: Heap on socket 0 was shrunk by 514MB 00:05:26.732 EAL: Trying to obtain current memory policy. 
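Annotation: every "expanded by" event above is paired with a matching "shrunk by" once the test frees the buffer, consistent with the "Hugepages will be freed exactly as allocated" notice earlier. With the run captured to a file, the pairing can be checked with a one-liner (the log path is a placeholder):

  # The two counts should match: each expansion is undone by an equal shrink
  grep -c 'Heap on socket 0 was expanded' vtophys.log
  grep -c 'Heap on socket 0 was shrunk'  vtophys.log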
00:05:26.732 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.990 EAL: Restoring previous memory policy: 4 00:05:26.990 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.990 EAL: request: mp_malloc_sync 00:05:26.990 EAL: No shared files mode enabled, IPC is disabled 00:05:26.990 EAL: Heap on socket 0 was expanded by 1026MB 00:05:27.919 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.919 EAL: request: mp_malloc_sync 00:05:27.919 EAL: No shared files mode enabled, IPC is disabled 00:05:27.919 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:28.858 passed 00:05:28.858 00:05:28.858 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.858 suites 1 1 n/a 0 0 00:05:28.858 tests 2 2 2 0 0 00:05:28.858 asserts 5838 5838 5838 0 n/a 00:05:28.858 00:05:28.858 Elapsed time = 4.042 seconds 00:05:28.858 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.858 EAL: request: mp_malloc_sync 00:05:28.858 EAL: No shared files mode enabled, IPC is disabled 00:05:28.858 EAL: Heap on socket 0 was shrunk by 2MB 00:05:28.858 EAL: No shared files mode enabled, IPC is disabled 00:05:28.858 EAL: No shared files mode enabled, IPC is disabled 00:05:28.858 EAL: No shared files mode enabled, IPC is disabled 00:05:28.858 00:05:28.858 real 0m4.284s 00:05:28.858 user 0m3.582s 00:05:28.858 sys 0m0.565s 00:05:28.858 03:56:16 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.858 ************************************ 00:05:28.858 END TEST env_vtophys 00:05:28.858 ************************************ 00:05:28.858 03:56:16 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:28.858 03:56:16 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:28.858 03:56:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.858 03:56:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.858 03:56:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.858 ************************************ 00:05:28.858 START TEST env_pci 00:05:28.858 ************************************ 00:05:28.858 03:56:16 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:28.858 00:05:28.858 00:05:28.858 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.858 http://cunit.sourceforge.net/ 00:05:28.858 00:05:28.858 00:05:28.858 Suite: pci 00:05:28.858 Test: pci_hook ...[2024-12-06 03:56:16.152389] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57021 has claimed it 00:05:28.858 passed 00:05:28.858 00:05:28.858 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.858 suites 1 1 n/a 0 0 00:05:28.858 tests 1 1 1 0 0 00:05:28.858 asserts 25 25 25 0 n/a 00:05:28.858 00:05:28.858 Elapsed time = 0.005 seconds 00:05:28.858 EAL: Cannot find device (10000:00:01.0) 00:05:28.858 EAL: Failed to attach device on primary process 00:05:28.858 ************************************ 00:05:28.858 END TEST env_pci 00:05:28.858 ************************************ 00:05:28.858 00:05:28.858 real 0m0.054s 00:05:28.858 user 0m0.023s 00:05:28.858 sys 0m0.030s 00:05:28.858 03:56:16 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.858 03:56:16 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:28.858 03:56:16 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:28.858 03:56:16 env -- env/env.sh@15 -- # uname 00:05:28.858 03:56:16 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:28.858 03:56:16 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:28.858 03:56:16 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:28.858 03:56:16 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:28.858 03:56:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.858 03:56:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.858 ************************************ 00:05:28.858 START TEST env_dpdk_post_init 00:05:28.858 ************************************ 00:05:28.858 03:56:16 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:28.858 EAL: Detected CPU lcores: 10 00:05:28.858 EAL: Detected NUMA nodes: 1 00:05:28.858 EAL: Detected shared linkage of DPDK 00:05:28.858 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:28.858 EAL: Selected IOVA mode 'PA' 00:05:29.115 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:29.115 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:29.115 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:29.115 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:29.115 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:29.115 Starting DPDK initialization... 00:05:29.115 Starting SPDK post initialization... 00:05:29.115 SPDK NVMe probe 00:05:29.115 Attaching to 0000:00:10.0 00:05:29.115 Attaching to 0000:00:11.0 00:05:29.115 Attaching to 0000:00:12.0 00:05:29.115 Attaching to 0000:00:13.0 00:05:29.115 Attached to 0000:00:13.0 00:05:29.115 Attached to 0000:00:10.0 00:05:29.115 Attached to 0000:00:11.0 00:05:29.115 Attached to 0000:00:12.0 00:05:29.115 Cleaning up... 
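Annotation: the four controllers attached above are QEMU's emulated NVMe devices (vendor:device 1b36:0010), one per PCI address from 00:10.0 through 00:13.0. Outside the test, the same functions can be listed with lspci, assuming pciutils is installed:

  # List the emulated NVMe functions SPDK just attached to
  lspci -nn -d 1b36:0010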
00:05:29.115 ************************************ 00:05:29.115 00:05:29.115 real 0m0.251s 00:05:29.115 user 0m0.080s 00:05:29.115 sys 0m0.072s 00:05:29.115 03:56:16 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.115 03:56:16 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.115 END TEST env_dpdk_post_init 00:05:29.115 ************************************ 00:05:29.115 03:56:16 env -- env/env.sh@26 -- # uname 00:05:29.115 03:56:16 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:29.115 03:56:16 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.115 03:56:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.115 03:56:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.115 03:56:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.115 ************************************ 00:05:29.115 START TEST env_mem_callbacks 00:05:29.115 ************************************ 00:05:29.115 03:56:16 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.115 EAL: Detected CPU lcores: 10 00:05:29.115 EAL: Detected NUMA nodes: 1 00:05:29.115 EAL: Detected shared linkage of DPDK 00:05:29.115 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:29.115 EAL: Selected IOVA mode 'PA' 00:05:29.374 00:05:29.374 00:05:29.374 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.374 http://cunit.sourceforge.net/ 00:05:29.374 00:05:29.374 00:05:29.374 Suite: memory 00:05:29.374 Test: test ... 00:05:29.374 register 0x200000200000 2097152 00:05:29.374 malloc 3145728 00:05:29.374 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:29.374 register 0x200000400000 4194304 00:05:29.374 buf 0x2000004fffc0 len 3145728 PASSED 00:05:29.374 malloc 64 00:05:29.374 buf 0x2000004ffec0 len 64 PASSED 00:05:29.374 malloc 4194304 00:05:29.374 register 0x200000800000 6291456 00:05:29.374 buf 0x2000009fffc0 len 4194304 PASSED 00:05:29.374 free 0x2000004fffc0 3145728 00:05:29.374 free 0x2000004ffec0 64 00:05:29.374 unregister 0x200000400000 4194304 PASSED 00:05:29.374 free 0x2000009fffc0 4194304 00:05:29.374 unregister 0x200000800000 6291456 PASSED 00:05:29.374 malloc 8388608 00:05:29.374 register 0x200000400000 10485760 00:05:29.374 buf 0x2000005fffc0 len 8388608 PASSED 00:05:29.374 free 0x2000005fffc0 8388608 00:05:29.374 unregister 0x200000400000 10485760 PASSED 00:05:29.374 passed 00:05:29.374 00:05:29.374 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.374 suites 1 1 n/a 0 0 00:05:29.374 tests 1 1 1 0 0 00:05:29.374 asserts 15 15 15 0 n/a 00:05:29.374 00:05:29.374 Elapsed time = 0.041 seconds 00:05:29.374 00:05:29.374 real 0m0.209s 00:05:29.374 user 0m0.057s 00:05:29.374 sys 0m0.048s 00:05:29.374 03:56:16 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.374 03:56:16 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:29.374 ************************************ 00:05:29.374 END TEST env_mem_callbacks 00:05:29.374 ************************************ 00:05:29.374 00:05:29.374 real 0m5.415s 00:05:29.374 user 0m4.142s 00:05:29.374 sys 0m0.921s 00:05:29.374 03:56:16 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.374 03:56:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.374 ************************************ 00:05:29.374 END TEST env 00:05:29.374 
************************************ 00:05:29.374 03:56:16 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:29.374 03:56:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.374 03:56:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.374 03:56:16 -- common/autotest_common.sh@10 -- # set +x 00:05:29.374 ************************************ 00:05:29.374 START TEST rpc 00:05:29.374 ************************************ 00:05:29.374 03:56:16 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:29.374 * Looking for test storage... 00:05:29.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:29.374 03:56:16 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:29.374 03:56:16 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:29.374 03:56:16 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:29.635 03:56:16 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:29.635 03:56:16 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.635 03:56:16 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.635 03:56:16 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.635 03:56:16 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.635 03:56:16 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.635 03:56:16 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.635 03:56:16 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.635 03:56:16 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.635 03:56:16 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.635 03:56:16 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.635 03:56:16 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.635 03:56:16 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:29.635 03:56:16 rpc -- scripts/common.sh@345 -- # : 1 00:05:29.635 03:56:16 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.635 03:56:16 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.635 03:56:16 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:29.635 03:56:16 rpc -- scripts/common.sh@353 -- # local d=1 00:05:29.635 03:56:16 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.635 03:56:16 rpc -- scripts/common.sh@355 -- # echo 1 00:05:29.635 03:56:16 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.635 03:56:16 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:29.635 03:56:16 rpc -- scripts/common.sh@353 -- # local d=2 00:05:29.635 03:56:16 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.635 03:56:16 rpc -- scripts/common.sh@355 -- # echo 2 00:05:29.635 03:56:16 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.635 03:56:16 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.635 03:56:16 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.635 03:56:16 rpc -- scripts/common.sh@368 -- # return 0 00:05:29.635 03:56:16 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.635 03:56:16 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:29.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.635 --rc genhtml_branch_coverage=1 00:05:29.635 --rc genhtml_function_coverage=1 00:05:29.635 --rc genhtml_legend=1 00:05:29.635 --rc geninfo_all_blocks=1 00:05:29.635 --rc geninfo_unexecuted_blocks=1 00:05:29.635 00:05:29.635 ' 00:05:29.635 03:56:16 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:29.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.635 --rc genhtml_branch_coverage=1 00:05:29.635 --rc genhtml_function_coverage=1 00:05:29.635 --rc genhtml_legend=1 00:05:29.635 --rc geninfo_all_blocks=1 00:05:29.635 --rc geninfo_unexecuted_blocks=1 00:05:29.635 00:05:29.635 ' 00:05:29.635 03:56:16 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:29.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.635 --rc genhtml_branch_coverage=1 00:05:29.635 --rc genhtml_function_coverage=1 00:05:29.635 --rc genhtml_legend=1 00:05:29.635 --rc geninfo_all_blocks=1 00:05:29.635 --rc geninfo_unexecuted_blocks=1 00:05:29.635 00:05:29.635 ' 00:05:29.635 03:56:16 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:29.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.635 --rc genhtml_branch_coverage=1 00:05:29.635 --rc genhtml_function_coverage=1 00:05:29.635 --rc genhtml_legend=1 00:05:29.635 --rc geninfo_all_blocks=1 00:05:29.635 --rc geninfo_unexecuted_blocks=1 00:05:29.635 00:05:29.635 ' 00:05:29.635 03:56:16 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57148 00:05:29.635 03:56:16 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.635 03:56:16 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57148 00:05:29.635 03:56:16 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:29.635 03:56:16 rpc -- common/autotest_common.sh@835 -- # '[' -z 57148 ']' 00:05:29.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.635 03:56:16 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.635 03:56:16 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.635 03:56:16 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
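Annotation: waitforlisten blocks until spdk_tgt (pid 57148 here) is accepting RPCs on /var/tmp/spdk.sock. A simplified sketch of that readiness loop, using the real scripts/rpc.py spdk_get_version call but an illustrative retry budget:

  # Poll the RPC socket until the target answers, up to 100 tries
  for i in $(seq 1 100); do
      if scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &>/dev/null; then
          break
      fi
      sleep 0.1
  done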
00:05:29.635 03:56:16 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.635 03:56:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.635 [2024-12-06 03:56:17.008842] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:05:29.635 [2024-12-06 03:56:17.008957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57148 ] 00:05:29.895 [2024-12-06 03:56:17.163780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.895 [2024-12-06 03:56:17.247126] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:29.895 [2024-12-06 03:56:17.247176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57148' to capture a snapshot of events at runtime. 00:05:29.895 [2024-12-06 03:56:17.247184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:29.895 [2024-12-06 03:56:17.247192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:29.895 [2024-12-06 03:56:17.247197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57148 for offline analysis/debug. 00:05:29.895 [2024-12-06 03:56:17.247888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.461 03:56:17 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.461 03:56:17 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:30.461 03:56:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.461 03:56:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.461 03:56:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:30.462 03:56:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:30.462 03:56:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.462 03:56:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.462 03:56:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.462 ************************************ 00:05:30.462 START TEST rpc_integrity 00:05:30.462 ************************************ 00:05:30.462 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:30.462 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.462 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.462 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.462 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.462 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.462 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:30.462 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.462 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.462 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.462 03:56:17 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.462 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.462 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:30.462 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.462 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.462 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.462 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.462 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.462 { 00:05:30.462 "name": "Malloc0", 00:05:30.462 "aliases": [ 00:05:30.462 "3206f0b8-d698-46f8-86aa-377f1c60fcd7" 00:05:30.462 ], 00:05:30.462 "product_name": "Malloc disk", 00:05:30.462 "block_size": 512, 00:05:30.462 "num_blocks": 16384, 00:05:30.462 "uuid": "3206f0b8-d698-46f8-86aa-377f1c60fcd7", 00:05:30.462 "assigned_rate_limits": { 00:05:30.462 "rw_ios_per_sec": 0, 00:05:30.462 "rw_mbytes_per_sec": 0, 00:05:30.462 "r_mbytes_per_sec": 0, 00:05:30.462 "w_mbytes_per_sec": 0 00:05:30.462 }, 00:05:30.462 "claimed": false, 00:05:30.462 "zoned": false, 00:05:30.462 "supported_io_types": { 00:05:30.462 "read": true, 00:05:30.462 "write": true, 00:05:30.462 "unmap": true, 00:05:30.462 "flush": true, 00:05:30.462 "reset": true, 00:05:30.462 "nvme_admin": false, 00:05:30.462 "nvme_io": false, 00:05:30.462 "nvme_io_md": false, 00:05:30.462 "write_zeroes": true, 00:05:30.462 "zcopy": true, 00:05:30.462 "get_zone_info": false, 00:05:30.462 "zone_management": false, 00:05:30.462 "zone_append": false, 00:05:30.462 "compare": false, 00:05:30.462 "compare_and_write": false, 00:05:30.462 "abort": true, 00:05:30.462 "seek_hole": false, 00:05:30.462 "seek_data": false, 00:05:30.462 "copy": true, 00:05:30.462 "nvme_iov_md": false 00:05:30.462 }, 00:05:30.462 "memory_domains": [ 00:05:30.462 { 00:05:30.462 "dma_device_id": "system", 00:05:30.462 "dma_device_type": 1 00:05:30.462 }, 00:05:30.462 { 00:05:30.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.462 "dma_device_type": 2 00:05:30.462 } 00:05:30.462 ], 00:05:30.462 "driver_specific": {} 00:05:30.462 } 00:05:30.462 ]' 00:05:30.462 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:30.462 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.462 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:30.462 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.462 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.462 [2024-12-06 03:56:17.956645] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:30.462 [2024-12-06 03:56:17.956700] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.462 [2024-12-06 03:56:17.956729] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:30.462 [2024-12-06 03:56:17.956739] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.462 [2024-12-06 03:56:17.958566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.462 [2024-12-06 03:56:17.958605] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.462 Passthru0 00:05:30.462 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.462 
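Annotation: the Malloc0 geometry in the JSON above follows directly from the 'bdev_malloc_create 8 512' call: an 8 MiB malloc disk with 512-byte blocks yields the reported num_blocks of 16384:

  # 8 MiB / 512 B per block = 16384 blocks
  echo $(( 8 * 1024 * 1024 / 512 ))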
03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:30.462 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.462 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.462 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.462 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:30.462 { 00:05:30.462 "name": "Malloc0", 00:05:30.462 "aliases": [ 00:05:30.462 "3206f0b8-d698-46f8-86aa-377f1c60fcd7" 00:05:30.462 ], 00:05:30.462 "product_name": "Malloc disk", 00:05:30.462 "block_size": 512, 00:05:30.462 "num_blocks": 16384, 00:05:30.462 "uuid": "3206f0b8-d698-46f8-86aa-377f1c60fcd7", 00:05:30.462 "assigned_rate_limits": { 00:05:30.462 "rw_ios_per_sec": 0, 00:05:30.462 "rw_mbytes_per_sec": 0, 00:05:30.462 "r_mbytes_per_sec": 0, 00:05:30.462 "w_mbytes_per_sec": 0 00:05:30.462 }, 00:05:30.462 "claimed": true, 00:05:30.462 "claim_type": "exclusive_write", 00:05:30.462 "zoned": false, 00:05:30.462 "supported_io_types": { 00:05:30.462 "read": true, 00:05:30.462 "write": true, 00:05:30.462 "unmap": true, 00:05:30.462 "flush": true, 00:05:30.462 "reset": true, 00:05:30.462 "nvme_admin": false, 00:05:30.462 "nvme_io": false, 00:05:30.462 "nvme_io_md": false, 00:05:30.462 "write_zeroes": true, 00:05:30.462 "zcopy": true, 00:05:30.462 "get_zone_info": false, 00:05:30.462 "zone_management": false, 00:05:30.462 "zone_append": false, 00:05:30.462 "compare": false, 00:05:30.462 "compare_and_write": false, 00:05:30.462 "abort": true, 00:05:30.462 "seek_hole": false, 00:05:30.462 "seek_data": false, 00:05:30.462 "copy": true, 00:05:30.462 "nvme_iov_md": false 00:05:30.462 }, 00:05:30.462 "memory_domains": [ 00:05:30.462 { 00:05:30.462 "dma_device_id": "system", 00:05:30.462 "dma_device_type": 1 00:05:30.462 }, 00:05:30.462 { 00:05:30.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.462 "dma_device_type": 2 00:05:30.462 } 00:05:30.462 ], 00:05:30.462 "driver_specific": {} 00:05:30.462 }, 00:05:30.462 { 00:05:30.462 "name": "Passthru0", 00:05:30.462 "aliases": [ 00:05:30.462 "1ab15cc3-0e52-588f-a1a9-c6062fe03ed0" 00:05:30.462 ], 00:05:30.462 "product_name": "passthru", 00:05:30.462 "block_size": 512, 00:05:30.462 "num_blocks": 16384, 00:05:30.462 "uuid": "1ab15cc3-0e52-588f-a1a9-c6062fe03ed0", 00:05:30.462 "assigned_rate_limits": { 00:05:30.462 "rw_ios_per_sec": 0, 00:05:30.462 "rw_mbytes_per_sec": 0, 00:05:30.462 "r_mbytes_per_sec": 0, 00:05:30.462 "w_mbytes_per_sec": 0 00:05:30.462 }, 00:05:30.462 "claimed": false, 00:05:30.462 "zoned": false, 00:05:30.462 "supported_io_types": { 00:05:30.462 "read": true, 00:05:30.462 "write": true, 00:05:30.462 "unmap": true, 00:05:30.462 "flush": true, 00:05:30.462 "reset": true, 00:05:30.462 "nvme_admin": false, 00:05:30.462 "nvme_io": false, 00:05:30.462 "nvme_io_md": false, 00:05:30.462 "write_zeroes": true, 00:05:30.462 "zcopy": true, 00:05:30.462 "get_zone_info": false, 00:05:30.462 "zone_management": false, 00:05:30.462 "zone_append": false, 00:05:30.462 "compare": false, 00:05:30.462 "compare_and_write": false, 00:05:30.462 "abort": true, 00:05:30.462 "seek_hole": false, 00:05:30.462 "seek_data": false, 00:05:30.462 "copy": true, 00:05:30.462 "nvme_iov_md": false 00:05:30.462 }, 00:05:30.462 "memory_domains": [ 00:05:30.462 { 00:05:30.462 "dma_device_id": "system", 00:05:30.462 "dma_device_type": 1 00:05:30.462 }, 00:05:30.462 { 00:05:30.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.462 "dma_device_type": 2 
00:05:30.462 } 00:05:30.462 ], 00:05:30.462 "driver_specific": { 00:05:30.462 "passthru": { 00:05:30.462 "name": "Passthru0", 00:05:30.462 "base_bdev_name": "Malloc0" 00:05:30.462 } 00:05:30.462 } 00:05:30.462 } 00:05:30.462 ]' 00:05:30.462 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:30.723 03:56:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:30.723 03:56:18 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:30.723 03:56:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.723 03:56:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.723 03:56:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.723 03:56:18 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:30.723 03:56:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.723 03:56:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.723 03:56:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.723 03:56:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:30.723 03:56:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.723 03:56:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.723 03:56:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.723 03:56:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:30.723 03:56:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:30.723 03:56:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:30.723 00:05:30.723 real 0m0.237s 00:05:30.723 user 0m0.136s 00:05:30.723 sys 0m0.024s 00:05:30.723 ************************************ 00:05:30.723 03:56:18 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.723 03:56:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.723 END TEST rpc_integrity 00:05:30.723 ************************************ 00:05:30.723 03:56:18 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:30.723 03:56:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.723 03:56:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.723 03:56:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.723 ************************************ 00:05:30.723 START TEST rpc_plugins 00:05:30.723 ************************************ 00:05:30.723 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:30.723 03:56:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:30.723 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.723 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.723 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.723 03:56:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:30.723 03:56:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:30.723 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.723 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.723 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.723 03:56:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:30.723 { 00:05:30.723 "name": "Malloc1", 00:05:30.723 "aliases": 
[ 00:05:30.723 "38c1473a-46e0-4538-a678-5ec81a238bd1" 00:05:30.723 ], 00:05:30.723 "product_name": "Malloc disk", 00:05:30.723 "block_size": 4096, 00:05:30.723 "num_blocks": 256, 00:05:30.723 "uuid": "38c1473a-46e0-4538-a678-5ec81a238bd1", 00:05:30.723 "assigned_rate_limits": { 00:05:30.723 "rw_ios_per_sec": 0, 00:05:30.723 "rw_mbytes_per_sec": 0, 00:05:30.723 "r_mbytes_per_sec": 0, 00:05:30.723 "w_mbytes_per_sec": 0 00:05:30.723 }, 00:05:30.723 "claimed": false, 00:05:30.723 "zoned": false, 00:05:30.723 "supported_io_types": { 00:05:30.723 "read": true, 00:05:30.723 "write": true, 00:05:30.723 "unmap": true, 00:05:30.723 "flush": true, 00:05:30.723 "reset": true, 00:05:30.723 "nvme_admin": false, 00:05:30.723 "nvme_io": false, 00:05:30.723 "nvme_io_md": false, 00:05:30.723 "write_zeroes": true, 00:05:30.723 "zcopy": true, 00:05:30.723 "get_zone_info": false, 00:05:30.723 "zone_management": false, 00:05:30.723 "zone_append": false, 00:05:30.723 "compare": false, 00:05:30.723 "compare_and_write": false, 00:05:30.723 "abort": true, 00:05:30.723 "seek_hole": false, 00:05:30.723 "seek_data": false, 00:05:30.723 "copy": true, 00:05:30.723 "nvme_iov_md": false 00:05:30.723 }, 00:05:30.723 "memory_domains": [ 00:05:30.723 { 00:05:30.723 "dma_device_id": "system", 00:05:30.723 "dma_device_type": 1 00:05:30.723 }, 00:05:30.723 { 00:05:30.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.723 "dma_device_type": 2 00:05:30.723 } 00:05:30.723 ], 00:05:30.723 "driver_specific": {} 00:05:30.723 } 00:05:30.723 ]' 00:05:30.723 03:56:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:30.723 03:56:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:30.723 03:56:18 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:30.723 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.723 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.723 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.723 03:56:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:30.723 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.723 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.723 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.723 03:56:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:30.723 03:56:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:30.723 03:56:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:30.723 00:05:30.723 real 0m0.102s 00:05:30.723 user 0m0.059s 00:05:30.723 sys 0m0.012s 00:05:30.723 ************************************ 00:05:30.723 END TEST rpc_plugins 00:05:30.723 ************************************ 00:05:30.723 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.723 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.983 03:56:18 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:30.983 03:56:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.983 03:56:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.983 03:56:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.983 ************************************ 00:05:30.983 START TEST rpc_trace_cmd_test 00:05:30.983 ************************************ 00:05:30.983 03:56:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:05:30.983 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:30.983 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:30.983 03:56:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.983 03:56:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:30.983 03:56:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.983 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:30.983 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57148", 00:05:30.983 "tpoint_group_mask": "0x8", 00:05:30.983 "iscsi_conn": { 00:05:30.983 "mask": "0x2", 00:05:30.983 "tpoint_mask": "0x0" 00:05:30.983 }, 00:05:30.983 "scsi": { 00:05:30.983 "mask": "0x4", 00:05:30.983 "tpoint_mask": "0x0" 00:05:30.983 }, 00:05:30.983 "bdev": { 00:05:30.983 "mask": "0x8", 00:05:30.983 "tpoint_mask": "0xffffffffffffffff" 00:05:30.983 }, 00:05:30.983 "nvmf_rdma": { 00:05:30.983 "mask": "0x10", 00:05:30.983 "tpoint_mask": "0x0" 00:05:30.983 }, 00:05:30.983 "nvmf_tcp": { 00:05:30.983 "mask": "0x20", 00:05:30.983 "tpoint_mask": "0x0" 00:05:30.983 }, 00:05:30.983 "ftl": { 00:05:30.983 "mask": "0x40", 00:05:30.983 "tpoint_mask": "0x0" 00:05:30.983 }, 00:05:30.983 "blobfs": { 00:05:30.983 "mask": "0x80", 00:05:30.983 "tpoint_mask": "0x0" 00:05:30.983 }, 00:05:30.983 "dsa": { 00:05:30.983 "mask": "0x200", 00:05:30.983 "tpoint_mask": "0x0" 00:05:30.983 }, 00:05:30.983 "thread": { 00:05:30.983 "mask": "0x400", 00:05:30.983 "tpoint_mask": "0x0" 00:05:30.983 }, 00:05:30.983 "nvme_pcie": { 00:05:30.983 "mask": "0x800", 00:05:30.983 "tpoint_mask": "0x0" 00:05:30.983 }, 00:05:30.983 "iaa": { 00:05:30.983 "mask": "0x1000", 00:05:30.983 "tpoint_mask": "0x0" 00:05:30.983 }, 00:05:30.983 "nvme_tcp": { 00:05:30.983 "mask": "0x2000", 00:05:30.983 "tpoint_mask": "0x0" 00:05:30.983 }, 00:05:30.983 "bdev_nvme": { 00:05:30.983 "mask": "0x4000", 00:05:30.983 "tpoint_mask": "0x0" 00:05:30.983 }, 00:05:30.983 "sock": { 00:05:30.983 "mask": "0x8000", 00:05:30.983 "tpoint_mask": "0x0" 00:05:30.983 }, 00:05:30.983 "blob": { 00:05:30.983 "mask": "0x10000", 00:05:30.983 "tpoint_mask": "0x0" 00:05:30.983 }, 00:05:30.983 "bdev_raid": { 00:05:30.983 "mask": "0x20000", 00:05:30.983 "tpoint_mask": "0x0" 00:05:30.983 }, 00:05:30.983 "scheduler": { 00:05:30.983 "mask": "0x40000", 00:05:30.983 "tpoint_mask": "0x0" 00:05:30.983 } 00:05:30.983 }' 00:05:30.983 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:30.983 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:30.983 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:30.983 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:30.983 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:30.983 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:30.983 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:30.983 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:30.983 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:30.983 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:30.983 00:05:30.983 real 0m0.162s 00:05:30.983 user 0m0.141s 00:05:30.983 sys 0m0.013s 00:05:30.983 03:56:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:30.983 ************************************ 00:05:30.983 END TEST rpc_trace_cmd_test 00:05:30.983 ************************************ 00:05:30.983 03:56:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:30.983 03:56:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:30.983 03:56:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:30.983 03:56:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:30.983 03:56:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.983 03:56:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.983 03:56:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.983 ************************************ 00:05:30.983 START TEST rpc_daemon_integrity 00:05:30.983 ************************************ 00:05:30.983 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:30.983 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.983 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.983 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.983 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.983 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.983 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:30.983 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.983 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.983 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.983 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.244 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.244 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:31.244 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:31.244 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.244 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.244 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.244 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:31.244 { 00:05:31.244 "name": "Malloc2", 00:05:31.244 "aliases": [ 00:05:31.244 "1dbbd10b-150a-4ff4-b704-fd13928e72a3" 00:05:31.244 ], 00:05:31.244 "product_name": "Malloc disk", 00:05:31.244 "block_size": 512, 00:05:31.244 "num_blocks": 16384, 00:05:31.244 "uuid": "1dbbd10b-150a-4ff4-b704-fd13928e72a3", 00:05:31.244 "assigned_rate_limits": { 00:05:31.244 "rw_ios_per_sec": 0, 00:05:31.244 "rw_mbytes_per_sec": 0, 00:05:31.244 "r_mbytes_per_sec": 0, 00:05:31.244 "w_mbytes_per_sec": 0 00:05:31.244 }, 00:05:31.244 "claimed": false, 00:05:31.244 "zoned": false, 00:05:31.244 "supported_io_types": { 00:05:31.244 "read": true, 00:05:31.244 "write": true, 00:05:31.244 "unmap": true, 00:05:31.244 "flush": true, 00:05:31.244 "reset": true, 00:05:31.244 "nvme_admin": false, 00:05:31.244 "nvme_io": false, 00:05:31.244 "nvme_io_md": false, 00:05:31.244 "write_zeroes": true, 00:05:31.244 "zcopy": true, 00:05:31.244 "get_zone_info": false, 00:05:31.244 "zone_management": false, 00:05:31.244 "zone_append": false, 00:05:31.244 "compare": false, 00:05:31.244 
"compare_and_write": false, 00:05:31.244 "abort": true, 00:05:31.244 "seek_hole": false, 00:05:31.244 "seek_data": false, 00:05:31.244 "copy": true, 00:05:31.244 "nvme_iov_md": false 00:05:31.244 }, 00:05:31.244 "memory_domains": [ 00:05:31.244 { 00:05:31.244 "dma_device_id": "system", 00:05:31.244 "dma_device_type": 1 00:05:31.244 }, 00:05:31.244 { 00:05:31.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.244 "dma_device_type": 2 00:05:31.244 } 00:05:31.244 ], 00:05:31.244 "driver_specific": {} 00:05:31.244 } 00:05:31.244 ]' 00:05:31.244 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:31.244 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:31.244 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:31.244 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.244 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.244 [2024-12-06 03:56:18.561991] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:31.244 [2024-12-06 03:56:18.562039] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:31.244 [2024-12-06 03:56:18.562056] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:31.244 [2024-12-06 03:56:18.562066] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:31.244 [2024-12-06 03:56:18.563870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:31.244 [2024-12-06 03:56:18.563903] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:31.244 Passthru0 00:05:31.244 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.244 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:31.244 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.244 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.244 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.245 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:31.245 { 00:05:31.245 "name": "Malloc2", 00:05:31.245 "aliases": [ 00:05:31.245 "1dbbd10b-150a-4ff4-b704-fd13928e72a3" 00:05:31.245 ], 00:05:31.245 "product_name": "Malloc disk", 00:05:31.245 "block_size": 512, 00:05:31.245 "num_blocks": 16384, 00:05:31.245 "uuid": "1dbbd10b-150a-4ff4-b704-fd13928e72a3", 00:05:31.245 "assigned_rate_limits": { 00:05:31.245 "rw_ios_per_sec": 0, 00:05:31.245 "rw_mbytes_per_sec": 0, 00:05:31.245 "r_mbytes_per_sec": 0, 00:05:31.245 "w_mbytes_per_sec": 0 00:05:31.245 }, 00:05:31.245 "claimed": true, 00:05:31.245 "claim_type": "exclusive_write", 00:05:31.245 "zoned": false, 00:05:31.245 "supported_io_types": { 00:05:31.245 "read": true, 00:05:31.245 "write": true, 00:05:31.245 "unmap": true, 00:05:31.245 "flush": true, 00:05:31.245 "reset": true, 00:05:31.245 "nvme_admin": false, 00:05:31.245 "nvme_io": false, 00:05:31.245 "nvme_io_md": false, 00:05:31.245 "write_zeroes": true, 00:05:31.245 "zcopy": true, 00:05:31.245 "get_zone_info": false, 00:05:31.245 "zone_management": false, 00:05:31.245 "zone_append": false, 00:05:31.245 "compare": false, 00:05:31.245 "compare_and_write": false, 00:05:31.245 "abort": true, 00:05:31.245 "seek_hole": false, 00:05:31.245 "seek_data": false, 
00:05:31.245 "copy": true, 00:05:31.245 "nvme_iov_md": false 00:05:31.245 }, 00:05:31.245 "memory_domains": [ 00:05:31.245 { 00:05:31.245 "dma_device_id": "system", 00:05:31.245 "dma_device_type": 1 00:05:31.245 }, 00:05:31.245 { 00:05:31.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.245 "dma_device_type": 2 00:05:31.245 } 00:05:31.245 ], 00:05:31.245 "driver_specific": {} 00:05:31.245 }, 00:05:31.245 { 00:05:31.245 "name": "Passthru0", 00:05:31.245 "aliases": [ 00:05:31.245 "cbe65932-b4f6-507b-93e9-9854e107f617" 00:05:31.245 ], 00:05:31.245 "product_name": "passthru", 00:05:31.245 "block_size": 512, 00:05:31.245 "num_blocks": 16384, 00:05:31.245 "uuid": "cbe65932-b4f6-507b-93e9-9854e107f617", 00:05:31.245 "assigned_rate_limits": { 00:05:31.245 "rw_ios_per_sec": 0, 00:05:31.245 "rw_mbytes_per_sec": 0, 00:05:31.245 "r_mbytes_per_sec": 0, 00:05:31.245 "w_mbytes_per_sec": 0 00:05:31.245 }, 00:05:31.245 "claimed": false, 00:05:31.245 "zoned": false, 00:05:31.245 "supported_io_types": { 00:05:31.245 "read": true, 00:05:31.245 "write": true, 00:05:31.245 "unmap": true, 00:05:31.245 "flush": true, 00:05:31.245 "reset": true, 00:05:31.245 "nvme_admin": false, 00:05:31.245 "nvme_io": false, 00:05:31.245 "nvme_io_md": false, 00:05:31.245 "write_zeroes": true, 00:05:31.245 "zcopy": true, 00:05:31.245 "get_zone_info": false, 00:05:31.245 "zone_management": false, 00:05:31.245 "zone_append": false, 00:05:31.245 "compare": false, 00:05:31.245 "compare_and_write": false, 00:05:31.245 "abort": true, 00:05:31.245 "seek_hole": false, 00:05:31.245 "seek_data": false, 00:05:31.245 "copy": true, 00:05:31.245 "nvme_iov_md": false 00:05:31.245 }, 00:05:31.245 "memory_domains": [ 00:05:31.245 { 00:05:31.245 "dma_device_id": "system", 00:05:31.245 "dma_device_type": 1 00:05:31.245 }, 00:05:31.245 { 00:05:31.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.245 "dma_device_type": 2 00:05:31.245 } 00:05:31.245 ], 00:05:31.245 "driver_specific": { 00:05:31.245 "passthru": { 00:05:31.245 "name": "Passthru0", 00:05:31.245 "base_bdev_name": "Malloc2" 00:05:31.245 } 00:05:31.245 } 00:05:31.245 } 00:05:31.245 ]' 00:05:31.245 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:31.245 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:31.245 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:31.245 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.245 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.245 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.245 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:31.245 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.245 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.245 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.245 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:31.245 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.245 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.245 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.245 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:05:31.245 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:31.245 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:31.245 00:05:31.245 real 0m0.212s 00:05:31.245 user 0m0.120s 00:05:31.245 sys 0m0.028s 00:05:31.245 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.245 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.245 ************************************ 00:05:31.245 END TEST rpc_daemon_integrity 00:05:31.245 ************************************ 00:05:31.245 03:56:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:31.245 03:56:18 rpc -- rpc/rpc.sh@84 -- # killprocess 57148 00:05:31.245 03:56:18 rpc -- common/autotest_common.sh@954 -- # '[' -z 57148 ']' 00:05:31.245 03:56:18 rpc -- common/autotest_common.sh@958 -- # kill -0 57148 00:05:31.245 03:56:18 rpc -- common/autotest_common.sh@959 -- # uname 00:05:31.245 03:56:18 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.245 03:56:18 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57148 00:05:31.245 03:56:18 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.245 03:56:18 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.245 killing process with pid 57148 00:05:31.245 03:56:18 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57148' 00:05:31.245 03:56:18 rpc -- common/autotest_common.sh@973 -- # kill 57148 00:05:31.245 03:56:18 rpc -- common/autotest_common.sh@978 -- # wait 57148 00:05:32.630 00:05:32.630 real 0m3.119s 00:05:32.630 user 0m3.550s 00:05:32.630 sys 0m0.549s 00:05:32.630 03:56:19 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.630 ************************************ 00:05:32.630 END TEST rpc 00:05:32.630 ************************************ 00:05:32.630 03:56:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.630 03:56:19 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:32.630 03:56:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.630 03:56:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.630 03:56:19 -- common/autotest_common.sh@10 -- # set +x 00:05:32.630 ************************************ 00:05:32.630 START TEST skip_rpc 00:05:32.631 ************************************ 00:05:32.631 03:56:19 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:32.631 * Looking for test storage... 
00:05:32.631 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:32.631 03:56:20 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:32.631 03:56:20 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:32.631 03:56:20 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:32.631 03:56:20 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.631 03:56:20 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:32.631 03:56:20 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.631 03:56:20 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:32.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.631 --rc genhtml_branch_coverage=1 00:05:32.631 --rc genhtml_function_coverage=1 00:05:32.631 --rc genhtml_legend=1 00:05:32.631 --rc geninfo_all_blocks=1 00:05:32.631 --rc geninfo_unexecuted_blocks=1 00:05:32.631 00:05:32.631 ' 00:05:32.631 03:56:20 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:32.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.631 --rc genhtml_branch_coverage=1 00:05:32.631 --rc genhtml_function_coverage=1 00:05:32.631 --rc genhtml_legend=1 00:05:32.631 --rc geninfo_all_blocks=1 00:05:32.631 --rc geninfo_unexecuted_blocks=1 00:05:32.631 00:05:32.631 ' 00:05:32.631 03:56:20 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:32.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.631 --rc genhtml_branch_coverage=1 00:05:32.631 --rc genhtml_function_coverage=1 00:05:32.631 --rc genhtml_legend=1 00:05:32.631 --rc geninfo_all_blocks=1 00:05:32.631 --rc geninfo_unexecuted_blocks=1 00:05:32.631 00:05:32.631 ' 00:05:32.631 03:56:20 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:32.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.631 --rc genhtml_branch_coverage=1 00:05:32.631 --rc genhtml_function_coverage=1 00:05:32.631 --rc genhtml_legend=1 00:05:32.631 --rc geninfo_all_blocks=1 00:05:32.631 --rc geninfo_unexecuted_blocks=1 00:05:32.631 00:05:32.631 ' 00:05:32.631 03:56:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:32.631 03:56:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:32.631 03:56:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:32.631 03:56:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.631 03:56:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.631 03:56:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.631 ************************************ 00:05:32.631 START TEST skip_rpc 00:05:32.631 ************************************ 00:05:32.631 03:56:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:32.631 03:56:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57355 00:05:32.631 03:56:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.631 03:56:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:32.631 03:56:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:33.002 [2024-12-06 03:56:20.237636] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:05:33.002 [2024-12-06 03:56:20.237778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57355 ] 00:05:33.002 [2024-12-06 03:56:20.395026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.325 [2024-12-06 03:56:20.480453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57355 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57355 ']' 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57355 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57355 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.611 killing process with pid 57355 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57355' 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57355 00:05:38.611 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57355 00:05:39.182 00:05:39.182 real 0m6.255s 00:05:39.182 user 0m5.880s 00:05:39.182 sys 0m0.262s 00:05:39.182 03:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.182 03:56:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.182 ************************************ 00:05:39.182 END TEST skip_rpc 00:05:39.182 
************************************ 00:05:39.182 03:56:26 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:39.182 03:56:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.182 03:56:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.182 03:56:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.182 ************************************ 00:05:39.182 START TEST skip_rpc_with_json 00:05:39.182 ************************************ 00:05:39.182 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:39.182 03:56:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:39.182 03:56:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57448 00:05:39.182 03:56:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.182 03:56:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57448 00:05:39.182 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57448 ']' 00:05:39.182 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.182 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.182 03:56:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.182 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.182 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.182 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.182 [2024-12-06 03:56:26.545600] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:05:39.182 [2024-12-06 03:56:26.545738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57448 ] 00:05:39.182 [2024-12-06 03:56:26.705488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.444 [2024-12-06 03:56:26.809731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.135 03:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.135 03:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:40.135 03:56:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:40.135 03:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.135 03:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.135 [2024-12-06 03:56:27.438903] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:40.135 request: 00:05:40.135 { 00:05:40.135 "trtype": "tcp", 00:05:40.135 "method": "nvmf_get_transports", 00:05:40.135 "req_id": 1 00:05:40.135 } 00:05:40.135 Got JSON-RPC error response 00:05:40.135 response: 00:05:40.135 { 00:05:40.135 "code": -19, 00:05:40.135 "message": "No such device" 00:05:40.135 } 00:05:40.135 03:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:40.135 03:56:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:40.135 03:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.135 03:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.135 [2024-12-06 03:56:27.447016] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:40.135 03:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.135 03:56:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:40.135 03:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.135 03:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.394 03:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.394 03:56:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:40.394 { 00:05:40.394 "subsystems": [ 00:05:40.394 { 00:05:40.394 "subsystem": "fsdev", 00:05:40.394 "config": [ 00:05:40.394 { 00:05:40.394 "method": "fsdev_set_opts", 00:05:40.394 "params": { 00:05:40.394 "fsdev_io_pool_size": 65535, 00:05:40.394 "fsdev_io_cache_size": 256 00:05:40.394 } 00:05:40.394 } 00:05:40.394 ] 00:05:40.394 }, 00:05:40.394 { 00:05:40.394 "subsystem": "keyring", 00:05:40.394 "config": [] 00:05:40.394 }, 00:05:40.394 { 00:05:40.394 "subsystem": "iobuf", 00:05:40.394 "config": [ 00:05:40.394 { 00:05:40.394 "method": "iobuf_set_options", 00:05:40.394 "params": { 00:05:40.394 "small_pool_count": 8192, 00:05:40.394 "large_pool_count": 1024, 00:05:40.394 "small_bufsize": 8192, 00:05:40.394 "large_bufsize": 135168, 00:05:40.394 "enable_numa": false 00:05:40.394 } 00:05:40.394 } 00:05:40.394 ] 00:05:40.394 }, 00:05:40.394 { 00:05:40.394 "subsystem": "sock", 00:05:40.394 "config": [ 00:05:40.394 { 
00:05:40.394 "method": "sock_set_default_impl", 00:05:40.394 "params": { 00:05:40.394 "impl_name": "posix" 00:05:40.394 } 00:05:40.394 }, 00:05:40.394 { 00:05:40.394 "method": "sock_impl_set_options", 00:05:40.394 "params": { 00:05:40.394 "impl_name": "ssl", 00:05:40.394 "recv_buf_size": 4096, 00:05:40.394 "send_buf_size": 4096, 00:05:40.394 "enable_recv_pipe": true, 00:05:40.394 "enable_quickack": false, 00:05:40.394 "enable_placement_id": 0, 00:05:40.394 "enable_zerocopy_send_server": true, 00:05:40.394 "enable_zerocopy_send_client": false, 00:05:40.394 "zerocopy_threshold": 0, 00:05:40.394 "tls_version": 0, 00:05:40.394 "enable_ktls": false 00:05:40.394 } 00:05:40.394 }, 00:05:40.394 { 00:05:40.394 "method": "sock_impl_set_options", 00:05:40.394 "params": { 00:05:40.394 "impl_name": "posix", 00:05:40.394 "recv_buf_size": 2097152, 00:05:40.394 "send_buf_size": 2097152, 00:05:40.394 "enable_recv_pipe": true, 00:05:40.394 "enable_quickack": false, 00:05:40.394 "enable_placement_id": 0, 00:05:40.394 "enable_zerocopy_send_server": true, 00:05:40.394 "enable_zerocopy_send_client": false, 00:05:40.394 "zerocopy_threshold": 0, 00:05:40.394 "tls_version": 0, 00:05:40.394 "enable_ktls": false 00:05:40.394 } 00:05:40.394 } 00:05:40.394 ] 00:05:40.394 }, 00:05:40.394 { 00:05:40.394 "subsystem": "vmd", 00:05:40.394 "config": [] 00:05:40.394 }, 00:05:40.394 { 00:05:40.394 "subsystem": "accel", 00:05:40.394 "config": [ 00:05:40.394 { 00:05:40.394 "method": "accel_set_options", 00:05:40.394 "params": { 00:05:40.394 "small_cache_size": 128, 00:05:40.394 "large_cache_size": 16, 00:05:40.394 "task_count": 2048, 00:05:40.394 "sequence_count": 2048, 00:05:40.394 "buf_count": 2048 00:05:40.394 } 00:05:40.394 } 00:05:40.394 ] 00:05:40.394 }, 00:05:40.394 { 00:05:40.394 "subsystem": "bdev", 00:05:40.394 "config": [ 00:05:40.394 { 00:05:40.394 "method": "bdev_set_options", 00:05:40.394 "params": { 00:05:40.394 "bdev_io_pool_size": 65535, 00:05:40.394 "bdev_io_cache_size": 256, 00:05:40.394 "bdev_auto_examine": true, 00:05:40.394 "iobuf_small_cache_size": 128, 00:05:40.394 "iobuf_large_cache_size": 16 00:05:40.394 } 00:05:40.394 }, 00:05:40.394 { 00:05:40.394 "method": "bdev_raid_set_options", 00:05:40.394 "params": { 00:05:40.394 "process_window_size_kb": 1024, 00:05:40.394 "process_max_bandwidth_mb_sec": 0 00:05:40.394 } 00:05:40.394 }, 00:05:40.394 { 00:05:40.394 "method": "bdev_iscsi_set_options", 00:05:40.394 "params": { 00:05:40.394 "timeout_sec": 30 00:05:40.394 } 00:05:40.394 }, 00:05:40.394 { 00:05:40.394 "method": "bdev_nvme_set_options", 00:05:40.394 "params": { 00:05:40.394 "action_on_timeout": "none", 00:05:40.394 "timeout_us": 0, 00:05:40.394 "timeout_admin_us": 0, 00:05:40.394 "keep_alive_timeout_ms": 10000, 00:05:40.394 "arbitration_burst": 0, 00:05:40.394 "low_priority_weight": 0, 00:05:40.394 "medium_priority_weight": 0, 00:05:40.394 "high_priority_weight": 0, 00:05:40.394 "nvme_adminq_poll_period_us": 10000, 00:05:40.394 "nvme_ioq_poll_period_us": 0, 00:05:40.394 "io_queue_requests": 0, 00:05:40.394 "delay_cmd_submit": true, 00:05:40.394 "transport_retry_count": 4, 00:05:40.394 "bdev_retry_count": 3, 00:05:40.394 "transport_ack_timeout": 0, 00:05:40.394 "ctrlr_loss_timeout_sec": 0, 00:05:40.394 "reconnect_delay_sec": 0, 00:05:40.394 "fast_io_fail_timeout_sec": 0, 00:05:40.394 "disable_auto_failback": false, 00:05:40.394 "generate_uuids": false, 00:05:40.394 "transport_tos": 0, 00:05:40.394 "nvme_error_stat": false, 00:05:40.394 "rdma_srq_size": 0, 00:05:40.394 "io_path_stat": false, 
00:05:40.394 "allow_accel_sequence": false, 00:05:40.394 "rdma_max_cq_size": 0, 00:05:40.395 "rdma_cm_event_timeout_ms": 0, 00:05:40.395 "dhchap_digests": [ 00:05:40.395 "sha256", 00:05:40.395 "sha384", 00:05:40.395 "sha512" 00:05:40.395 ], 00:05:40.395 "dhchap_dhgroups": [ 00:05:40.395 "null", 00:05:40.395 "ffdhe2048", 00:05:40.395 "ffdhe3072", 00:05:40.395 "ffdhe4096", 00:05:40.395 "ffdhe6144", 00:05:40.395 "ffdhe8192" 00:05:40.395 ] 00:05:40.395 } 00:05:40.395 }, 00:05:40.395 { 00:05:40.395 "method": "bdev_nvme_set_hotplug", 00:05:40.395 "params": { 00:05:40.395 "period_us": 100000, 00:05:40.395 "enable": false 00:05:40.395 } 00:05:40.395 }, 00:05:40.395 { 00:05:40.395 "method": "bdev_wait_for_examine" 00:05:40.395 } 00:05:40.395 ] 00:05:40.395 }, 00:05:40.395 { 00:05:40.395 "subsystem": "scsi", 00:05:40.395 "config": null 00:05:40.395 }, 00:05:40.395 { 00:05:40.395 "subsystem": "scheduler", 00:05:40.395 "config": [ 00:05:40.395 { 00:05:40.395 "method": "framework_set_scheduler", 00:05:40.395 "params": { 00:05:40.395 "name": "static" 00:05:40.395 } 00:05:40.395 } 00:05:40.395 ] 00:05:40.395 }, 00:05:40.395 { 00:05:40.395 "subsystem": "vhost_scsi", 00:05:40.395 "config": [] 00:05:40.395 }, 00:05:40.395 { 00:05:40.395 "subsystem": "vhost_blk", 00:05:40.395 "config": [] 00:05:40.395 }, 00:05:40.395 { 00:05:40.395 "subsystem": "ublk", 00:05:40.395 "config": [] 00:05:40.395 }, 00:05:40.395 { 00:05:40.395 "subsystem": "nbd", 00:05:40.395 "config": [] 00:05:40.395 }, 00:05:40.395 { 00:05:40.395 "subsystem": "nvmf", 00:05:40.395 "config": [ 00:05:40.395 { 00:05:40.395 "method": "nvmf_set_config", 00:05:40.395 "params": { 00:05:40.395 "discovery_filter": "match_any", 00:05:40.395 "admin_cmd_passthru": { 00:05:40.395 "identify_ctrlr": false 00:05:40.395 }, 00:05:40.395 "dhchap_digests": [ 00:05:40.395 "sha256", 00:05:40.395 "sha384", 00:05:40.395 "sha512" 00:05:40.395 ], 00:05:40.395 "dhchap_dhgroups": [ 00:05:40.395 "null", 00:05:40.395 "ffdhe2048", 00:05:40.395 "ffdhe3072", 00:05:40.395 "ffdhe4096", 00:05:40.395 "ffdhe6144", 00:05:40.395 "ffdhe8192" 00:05:40.395 ] 00:05:40.395 } 00:05:40.395 }, 00:05:40.395 { 00:05:40.395 "method": "nvmf_set_max_subsystems", 00:05:40.395 "params": { 00:05:40.395 "max_subsystems": 1024 00:05:40.395 } 00:05:40.395 }, 00:05:40.395 { 00:05:40.395 "method": "nvmf_set_crdt", 00:05:40.395 "params": { 00:05:40.395 "crdt1": 0, 00:05:40.395 "crdt2": 0, 00:05:40.395 "crdt3": 0 00:05:40.395 } 00:05:40.395 }, 00:05:40.395 { 00:05:40.395 "method": "nvmf_create_transport", 00:05:40.395 "params": { 00:05:40.395 "trtype": "TCP", 00:05:40.395 "max_queue_depth": 128, 00:05:40.395 "max_io_qpairs_per_ctrlr": 127, 00:05:40.395 "in_capsule_data_size": 4096, 00:05:40.395 "max_io_size": 131072, 00:05:40.395 "io_unit_size": 131072, 00:05:40.395 "max_aq_depth": 128, 00:05:40.395 "num_shared_buffers": 511, 00:05:40.395 "buf_cache_size": 4294967295, 00:05:40.395 "dif_insert_or_strip": false, 00:05:40.395 "zcopy": false, 00:05:40.395 "c2h_success": true, 00:05:40.395 "sock_priority": 0, 00:05:40.395 "abort_timeout_sec": 1, 00:05:40.395 "ack_timeout": 0, 00:05:40.395 "data_wr_pool_size": 0 00:05:40.395 } 00:05:40.395 } 00:05:40.395 ] 00:05:40.395 }, 00:05:40.395 { 00:05:40.395 "subsystem": "iscsi", 00:05:40.395 "config": [ 00:05:40.395 { 00:05:40.395 "method": "iscsi_set_options", 00:05:40.395 "params": { 00:05:40.395 "node_base": "iqn.2016-06.io.spdk", 00:05:40.395 "max_sessions": 128, 00:05:40.395 "max_connections_per_session": 2, 00:05:40.395 "max_queue_depth": 64, 00:05:40.395 
"default_time2wait": 2, 00:05:40.395 "default_time2retain": 20, 00:05:40.395 "first_burst_length": 8192, 00:05:40.395 "immediate_data": true, 00:05:40.395 "allow_duplicated_isid": false, 00:05:40.395 "error_recovery_level": 0, 00:05:40.395 "nop_timeout": 60, 00:05:40.395 "nop_in_interval": 30, 00:05:40.395 "disable_chap": false, 00:05:40.395 "require_chap": false, 00:05:40.395 "mutual_chap": false, 00:05:40.395 "chap_group": 0, 00:05:40.395 "max_large_datain_per_connection": 64, 00:05:40.395 "max_r2t_per_connection": 4, 00:05:40.395 "pdu_pool_size": 36864, 00:05:40.395 "immediate_data_pool_size": 16384, 00:05:40.395 "data_out_pool_size": 2048 00:05:40.395 } 00:05:40.395 } 00:05:40.395 ] 00:05:40.395 } 00:05:40.395 ] 00:05:40.395 } 00:05:40.395 killing process with pid 57448 00:05:40.395 03:56:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:40.395 03:56:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57448 00:05:40.395 03:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57448 ']' 00:05:40.395 03:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57448 00:05:40.395 03:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:40.395 03:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.395 03:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57448 00:05:40.395 03:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.395 03:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.395 03:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57448' 00:05:40.395 03:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57448 00:05:40.395 03:56:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57448 00:05:41.772 03:56:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57488 00:05:41.772 03:56:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:41.772 03:56:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:47.096 03:56:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57488 00:05:47.096 03:56:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57488 ']' 00:05:47.096 03:56:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57488 00:05:47.096 03:56:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:47.096 03:56:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.096 03:56:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57488 00:05:47.096 03:56:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.096 killing process with pid 57488 00:05:47.096 03:56:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.096 03:56:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57488' 00:05:47.096 03:56:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57488 00:05:47.096 03:56:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57488 00:05:48.036 03:56:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:48.036 03:56:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:48.036 00:05:48.036 real 0m8.925s 00:05:48.036 user 0m8.471s 00:05:48.036 sys 0m0.631s 00:05:48.036 03:56:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.036 03:56:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.036 ************************************ 00:05:48.036 END TEST skip_rpc_with_json 00:05:48.036 ************************************ 00:05:48.036 03:56:35 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:48.036 03:56:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.036 03:56:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.036 03:56:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.036 ************************************ 00:05:48.036 START TEST skip_rpc_with_delay 00:05:48.036 ************************************ 00:05:48.036 03:56:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:48.036 03:56:35 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.036 03:56:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:48.036 03:56:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.037 03:56:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.037 03:56:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.037 03:56:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.037 03:56:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.037 03:56:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.037 03:56:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.037 03:56:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.037 03:56:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:48.037 03:56:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.037 [2024-12-06 03:56:35.513310] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
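For context on the grep just above: skip_rpc_with_json captured the large JSON blob printed earlier with the save_config RPC, wrote it to test/rpc/config.json, restarted the target with --json, and then grepped the new target's log for the 'TCP Transport Init' notice to prove the tcp transport was rebuilt from the file. A condensed sketch of that save-and-replay cycle, assuming spdk_tgt and scripts/rpc.py on PATH (the sleeps stand in for the harness's waitforlisten helper, and file names are illustrative):

  # Start a target, create the transport over RPC, and snapshot the live config.
  spdk_tgt -m 0x1 & pid=$!
  sleep 2
  rpc.py nvmf_create_transport -t tcp
  rpc.py save_config > config.json
  kill $pid; wait $pid
  # Replay the snapshot: the new target recreates the tcp transport at startup.
  spdk_tgt -m 0x1 --json config.json > log.txt 2>&1 & pid=$!
  sleep 2
  grep -q 'TCP Transport Init' log.txt && echo "transport restored from config.json"
  kill $pid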
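The 'Cannot use --wait-for-rpc' *ERROR* a few lines up is the expected outcome, not a failure of the suite: skip_rpc_with_delay deliberately combines --no-rpc-server with --wait-for-rpc and uses the NOT wrapper from autotest_common.sh to assert that spdk_tgt exits non-zero instead of waiting forever for an RPC that can never arrive. The same check without the harness, as a rough sketch:

  # The target must refuse this contradictory flag pair; starting up would be the bug.
  if spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "spdk_tgt unexpectedly started with --wait-for-rpc and no RPC server" >&2
      exit 1
  fi
  echo "got the expected startup failure"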
00:05:48.037 ************************************ 00:05:48.037 END TEST skip_rpc_with_delay 00:05:48.037 ************************************ 00:05:48.037 03:56:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:48.037 03:56:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.037 03:56:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:48.037 03:56:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.037 00:05:48.037 real 0m0.122s 00:05:48.037 user 0m0.063s 00:05:48.037 sys 0m0.056s 00:05:48.037 03:56:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.037 03:56:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:48.343 03:56:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:48.343 03:56:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:48.343 03:56:35 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:48.343 03:56:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.343 03:56:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.343 03:56:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.343 ************************************ 00:05:48.343 START TEST exit_on_failed_rpc_init 00:05:48.343 ************************************ 00:05:48.343 03:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:48.343 03:56:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57610 00:05:48.343 03:56:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57610 00:05:48.343 03:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57610 ']' 00:05:48.343 03:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.343 03:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.343 03:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.343 03:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.343 03:56:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.343 03:56:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:48.343 [2024-12-06 03:56:35.675270] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:05:48.343 [2024-12-06 03:56:35.675679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57610 ] 00:05:48.343 [2024-12-06 03:56:35.822266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.603 [2024-12-06 03:56:35.909628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.228 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.228 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:49.228 03:56:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.228 03:56:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.228 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:49.228 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.228 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.228 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.228 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.228 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.228 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.228 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.228 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.228 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:49.228 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.228 [2024-12-06 03:56:36.549018] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:05:49.228 [2024-12-06 03:56:36.549135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57623 ] 00:05:49.228 [2024-12-06 03:56:36.709221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.488 [2024-12-06 03:56:36.809681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.488 [2024-12-06 03:56:36.809768] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:49.488 [2024-12-06 03:56:36.809781] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:49.488 [2024-12-06 03:56:36.809793] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:49.488 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:49.488 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:49.488 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:49.488 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:49.488 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:49.488 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:49.488 03:56:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:49.488 03:56:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57610 00:05:49.488 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57610 ']' 00:05:49.489 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57610 00:05:49.489 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:49.489 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.489 03:56:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57610 00:05:49.489 03:56:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.489 killing process with pid 57610 00:05:49.489 03:56:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.489 03:56:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57610' 00:05:49.489 03:56:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57610 00:05:49.489 03:56:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57610 00:05:50.869 00:05:50.869 real 0m2.640s 00:05:50.869 user 0m2.914s 00:05:50.869 sys 0m0.407s 00:05:50.869 ************************************ 00:05:50.869 03:56:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.869 03:56:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:50.869 END TEST exit_on_failed_rpc_init 00:05:50.869 ************************************ 00:05:50.869 03:56:38 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:50.870 00:05:50.870 real 0m18.290s 00:05:50.870 user 0m17.495s 00:05:50.870 sys 0m1.519s 00:05:50.870 03:56:38 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.870 ************************************ 00:05:50.870 END TEST skip_rpc 00:05:50.870 ************************************ 00:05:50.870 03:56:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.870 03:56:38 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:50.870 03:56:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.870 03:56:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.870 03:56:38 -- common/autotest_common.sh@10 -- # set +x 00:05:50.870 
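The exit_on_failed_rpc_init test that closed just above works by pointing two targets at the same default RPC socket: the second instance dies in rpc.c with 'Unix domain socket path /var/tmp/spdk.sock in use' and the test verifies both the non-zero exit and that the first instance keeps serving. A rough sketch of the collision, plus the usual remedy of giving each instance its own socket with -r (the second socket path is illustrative; the sleeps stand in for waitforlisten):

  # First target takes the default /var/tmp/spdk.sock.
  spdk_tgt -m 0x1 & pid1=$!
  sleep 2
  # A second target on the same socket fails RPC init, as in the errors logged above.
  spdk_tgt -m 0x2 || echo "second instance failed as expected"
  # Distinct sockets (-r) let two targets coexist on one host.
  spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock & pid2=$!
  sleep 2
  kill $pid1 $pid2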
************************************ 00:05:50.870 START TEST rpc_client 00:05:50.870 ************************************ 00:05:50.870 03:56:38 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:50.870 * Looking for test storage... 00:05:50.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:50.870 03:56:38 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:50.870 03:56:38 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:50.870 03:56:38 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:51.151 03:56:38 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:51.151 03:56:38 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.151 03:56:38 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.151 03:56:38 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.152 03:56:38 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:51.152 03:56:38 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.152 03:56:38 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:51.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.152 --rc genhtml_branch_coverage=1 00:05:51.152 --rc genhtml_function_coverage=1 00:05:51.152 --rc genhtml_legend=1 00:05:51.152 --rc geninfo_all_blocks=1 00:05:51.152 --rc geninfo_unexecuted_blocks=1 00:05:51.152 00:05:51.152 ' 00:05:51.152 03:56:38 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:51.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.152 --rc genhtml_branch_coverage=1 00:05:51.152 --rc genhtml_function_coverage=1 00:05:51.152 --rc genhtml_legend=1 00:05:51.152 --rc geninfo_all_blocks=1 00:05:51.152 --rc geninfo_unexecuted_blocks=1 00:05:51.152 00:05:51.152 ' 00:05:51.152 03:56:38 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:51.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.152 --rc genhtml_branch_coverage=1 00:05:51.152 --rc genhtml_function_coverage=1 00:05:51.152 --rc genhtml_legend=1 00:05:51.152 --rc geninfo_all_blocks=1 00:05:51.152 --rc geninfo_unexecuted_blocks=1 00:05:51.152 00:05:51.152 ' 00:05:51.152 03:56:38 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:51.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.152 --rc genhtml_branch_coverage=1 00:05:51.152 --rc genhtml_function_coverage=1 00:05:51.152 --rc genhtml_legend=1 00:05:51.152 --rc geninfo_all_blocks=1 00:05:51.152 --rc geninfo_unexecuted_blocks=1 00:05:51.152 00:05:51.152 ' 00:05:51.152 03:56:38 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:51.152 OK 00:05:51.152 03:56:38 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:51.152 00:05:51.152 real 0m0.193s 00:05:51.152 user 0m0.114s 00:05:51.152 sys 0m0.087s 00:05:51.152 03:56:38 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.152 ************************************ 00:05:51.152 END TEST rpc_client 00:05:51.152 ************************************ 00:05:51.152 03:56:38 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:51.152 03:56:38 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:51.152 03:56:38 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.152 03:56:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.152 03:56:38 -- common/autotest_common.sh@10 -- # set +x 00:05:51.152 ************************************ 00:05:51.152 START TEST json_config 00:05:51.152 ************************************ 00:05:51.152 03:56:38 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:51.152 03:56:38 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:51.152 03:56:38 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:51.152 03:56:38 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:51.152 03:56:38 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:51.152 03:56:38 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.152 03:56:38 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.152 03:56:38 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.152 03:56:38 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.152 03:56:38 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.152 03:56:38 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.152 03:56:38 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.152 03:56:38 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.152 03:56:38 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.152 03:56:38 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.152 03:56:38 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.152 03:56:38 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:51.152 03:56:38 json_config -- scripts/common.sh@345 -- # : 1 00:05:51.152 03:56:38 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.152 03:56:38 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.152 03:56:38 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:51.152 03:56:38 json_config -- scripts/common.sh@353 -- # local d=1 00:05:51.152 03:56:38 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.152 03:56:38 json_config -- scripts/common.sh@355 -- # echo 1 00:05:51.152 03:56:38 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.152 03:56:38 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:51.152 03:56:38 json_config -- scripts/common.sh@353 -- # local d=2 00:05:51.152 03:56:38 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.152 03:56:38 json_config -- scripts/common.sh@355 -- # echo 2 00:05:51.152 03:56:38 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.152 03:56:38 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.152 03:56:38 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.152 03:56:38 json_config -- scripts/common.sh@368 -- # return 0 00:05:51.152 03:56:38 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.152 03:56:38 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:51.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.152 --rc genhtml_branch_coverage=1 00:05:51.152 --rc genhtml_function_coverage=1 00:05:51.152 --rc genhtml_legend=1 00:05:51.152 --rc geninfo_all_blocks=1 00:05:51.152 --rc geninfo_unexecuted_blocks=1 00:05:51.152 00:05:51.152 ' 00:05:51.152 03:56:38 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:51.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.152 --rc genhtml_branch_coverage=1 00:05:51.152 --rc genhtml_function_coverage=1 00:05:51.153 --rc genhtml_legend=1 00:05:51.153 --rc geninfo_all_blocks=1 00:05:51.153 --rc geninfo_unexecuted_blocks=1 00:05:51.153 00:05:51.153 ' 00:05:51.153 03:56:38 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:51.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.153 --rc genhtml_branch_coverage=1 00:05:51.153 --rc genhtml_function_coverage=1 00:05:51.153 --rc genhtml_legend=1 00:05:51.153 --rc geninfo_all_blocks=1 00:05:51.153 --rc geninfo_unexecuted_blocks=1 00:05:51.153 00:05:51.153 ' 00:05:51.153 03:56:38 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:51.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.153 --rc genhtml_branch_coverage=1 00:05:51.153 --rc genhtml_function_coverage=1 00:05:51.153 --rc genhtml_legend=1 00:05:51.153 --rc geninfo_all_blocks=1 00:05:51.153 --rc geninfo_unexecuted_blocks=1 00:05:51.153 00:05:51.153 ' 00:05:51.153 03:56:38 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.153 03:56:38 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:74b81f80-223e-4515-b804-645729820039 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=74b81f80-223e-4515-b804-645729820039 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:51.153 03:56:38 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:51.153 03:56:38 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.153 03:56:38 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.153 03:56:38 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.153 03:56:38 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.153 03:56:38 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.153 03:56:38 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.153 03:56:38 json_config -- paths/export.sh@5 -- # export PATH 00:05:51.153 03:56:38 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@51 -- # : 0 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:51.153 03:56:38 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:51.153 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:51.153 03:56:38 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:51.153 03:56:38 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:51.153 03:56:38 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:51.153 03:56:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:51.153 03:56:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:51.153 03:56:38 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:51.153 03:56:38 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:51.153 WARNING: No tests are enabled so not running JSON configuration tests 00:05:51.153 03:56:38 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:51.153 00:05:51.153 real 0m0.131s 00:05:51.153 user 0m0.086s 00:05:51.153 sys 0m0.051s 00:05:51.153 03:56:38 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.153 03:56:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.153 ************************************ 00:05:51.153 END TEST json_config 00:05:51.153 ************************************ 00:05:51.415 03:56:38 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:51.415 03:56:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.415 03:56:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.415 03:56:38 -- common/autotest_common.sh@10 -- # set +x 00:05:51.415 ************************************ 00:05:51.415 START TEST json_config_extra_key 00:05:51.415 ************************************ 00:05:51.415 03:56:38 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:51.415 03:56:38 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:51.415 03:56:38 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:51.415 03:56:38 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:51.415 03:56:38 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:51.415 03:56:38 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.415 03:56:38 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.415 03:56:38 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.415 03:56:38 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.415 03:56:38 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.415 03:56:38 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.415 03:56:38 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:51.416 03:56:38 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.416 03:56:38 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:51.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.416 --rc genhtml_branch_coverage=1 00:05:51.416 --rc genhtml_function_coverage=1 00:05:51.416 --rc genhtml_legend=1 00:05:51.416 --rc geninfo_all_blocks=1 00:05:51.416 --rc geninfo_unexecuted_blocks=1 00:05:51.416 00:05:51.416 ' 00:05:51.416 03:56:38 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:51.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.416 --rc genhtml_branch_coverage=1 00:05:51.416 --rc genhtml_function_coverage=1 00:05:51.416 --rc genhtml_legend=1 00:05:51.416 --rc geninfo_all_blocks=1 00:05:51.416 --rc geninfo_unexecuted_blocks=1 00:05:51.416 00:05:51.416 ' 00:05:51.416 03:56:38 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:51.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.416 --rc genhtml_branch_coverage=1 00:05:51.416 --rc genhtml_function_coverage=1 00:05:51.416 --rc genhtml_legend=1 00:05:51.416 --rc geninfo_all_blocks=1 00:05:51.416 --rc geninfo_unexecuted_blocks=1 00:05:51.416 00:05:51.416 ' 00:05:51.416 03:56:38 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:51.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.416 --rc genhtml_branch_coverage=1 00:05:51.416 --rc 
genhtml_function_coverage=1 00:05:51.416 --rc genhtml_legend=1 00:05:51.416 --rc geninfo_all_blocks=1 00:05:51.416 --rc geninfo_unexecuted_blocks=1 00:05:51.416 00:05:51.416 ' 00:05:51.416 03:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:74b81f80-223e-4515-b804-645729820039 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=74b81f80-223e-4515-b804-645729820039 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.416 03:56:38 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.416 03:56:38 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.416 03:56:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.416 03:56:38 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.416 03:56:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:51.416 03:56:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:51.416 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:51.416 03:56:38 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:51.417 03:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:51.417 03:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:51.417 03:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:51.417 03:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:51.417 03:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:51.417 03:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:51.417 03:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:51.417 03:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:51.417 03:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:51.417 03:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:51.417 INFO: launching applications... 00:05:51.417 03:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
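The "[: : integer expression expected" message logged twice above comes from line 33 of test/nvmf/common.sh, where a variable that is empty in this run is compared against an integer with -eq; the script continues normally afterwards, so the branch is simply skipped. A minimal bash sketch of the failure mode and the usual guard (FLAG is an illustrative name, not the script's actual variable):

    #!/usr/bin/env bash
    FLAG=''                                   # empty in this run, as in the trace above
    [ "$FLAG" -eq 1 ] && echo enabled         # prints "[: : integer expression expected"
    [ "${FLAG:-0}" -eq 1 ] && echo enabled    # defaulting expansion: quiet, tests false

Because the comparison only gates an if, the message is cosmetic here; it just means the test evaluated false on a non-integer operand.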
00:05:51.417 03:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:51.417 03:56:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:51.417 03:56:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:51.417 03:56:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:51.417 03:56:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:51.417 03:56:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:51.417 03:56:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:51.417 03:56:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:51.417 03:56:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57816 00:05:51.417 Waiting for target to run... 00:05:51.417 03:56:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:51.417 03:56:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57816 /var/tmp/spdk_tgt.sock 00:05:51.417 03:56:38 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57816 ']' 00:05:51.417 03:56:38 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:51.417 03:56:38 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:51.417 03:56:38 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:51.417 03:56:38 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.417 03:56:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:51.417 03:56:38 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:51.417 [2024-12-06 03:56:38.918581] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:05:51.417 [2024-12-06 03:56:38.918709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57816 ] 00:05:51.989 [2024-12-06 03:56:39.244676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.989 [2024-12-06 03:56:39.337286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.560 03:56:39 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.560 00:05:52.560 03:56:39 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:52.560 03:56:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:52.560 INFO: shutting down applications... 00:05:52.560 03:56:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
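The shutdown that follows is a bounded poll rather than a blocking wait. Condensed from the json_config/common.sh trace below (its lines 38-45), with the pid taken from this run; the early-exit condition is paraphrased, not copied verbatim from the helper:

    #!/usr/bin/env bash
    app_pid=57816
    kill -SIGINT "$app_pid"                       # ask spdk_tgt to exit cleanly
    for ((i = 0; i < 30; i++)); do                # at most 30 half-second ticks
        kill -0 "$app_pid" 2>/dev/null || break   # kill -0 probes only; no signal is sent
        sleep 0.5
    done

In this run four ticks elapse before kill -0 stops succeeding, after which the script clears app_pid and reports "SPDK target shutdown done".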
00:05:52.560 03:56:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:52.560 03:56:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:52.560 03:56:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:52.560 03:56:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57816 ]] 00:05:52.560 03:56:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57816 00:05:52.560 03:56:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:52.560 03:56:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.560 03:56:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57816 00:05:52.560 03:56:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:52.820 03:56:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:52.820 03:56:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.820 03:56:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57816 00:05:52.820 03:56:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:53.392 03:56:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:53.392 03:56:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:53.392 03:56:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57816 00:05:53.392 03:56:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:53.965 03:56:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:53.965 03:56:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:53.965 03:56:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57816 00:05:53.965 03:56:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:54.537 SPDK target shutdown done 00:05:54.537 Success 00:05:54.537 03:56:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:54.537 03:56:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:54.537 03:56:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57816 00:05:54.537 03:56:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:54.537 03:56:41 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:54.537 03:56:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:54.537 03:56:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:54.537 03:56:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:54.537 ************************************ 00:05:54.537 END TEST json_config_extra_key 00:05:54.537 ************************************ 00:05:54.537 00:05:54.537 real 0m3.147s 00:05:54.537 user 0m2.726s 00:05:54.537 sys 0m0.375s 00:05:54.537 03:56:41 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.537 03:56:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:54.537 03:56:41 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:54.537 03:56:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.537 03:56:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.537 03:56:41 -- common/autotest_common.sh@10 -- # set +x 00:05:54.537 
************************************ 00:05:54.537 START TEST alias_rpc 00:05:54.537 ************************************ 00:05:54.537 03:56:41 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:54.537 * Looking for test storage... 00:05:54.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:54.537 03:56:41 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:54.537 03:56:41 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:54.537 03:56:41 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:54.537 03:56:42 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:54.537 03:56:42 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.537 03:56:42 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.538 03:56:42 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:54.538 03:56:42 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.538 03:56:42 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:54.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.538 --rc genhtml_branch_coverage=1 00:05:54.538 --rc genhtml_function_coverage=1 00:05:54.538 --rc genhtml_legend=1 00:05:54.538 --rc geninfo_all_blocks=1 00:05:54.538 --rc geninfo_unexecuted_blocks=1 00:05:54.538 00:05:54.538 ' 00:05:54.538 03:56:42 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:54.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.538 --rc genhtml_branch_coverage=1 00:05:54.538 --rc genhtml_function_coverage=1 00:05:54.538 --rc genhtml_legend=1 00:05:54.538 --rc geninfo_all_blocks=1 00:05:54.538 --rc geninfo_unexecuted_blocks=1 00:05:54.538 00:05:54.538 ' 00:05:54.538 03:56:42 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:54.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.538 --rc genhtml_branch_coverage=1 00:05:54.538 --rc genhtml_function_coverage=1 00:05:54.538 --rc genhtml_legend=1 00:05:54.538 --rc geninfo_all_blocks=1 00:05:54.538 --rc geninfo_unexecuted_blocks=1 00:05:54.538 00:05:54.538 ' 00:05:54.538 03:56:42 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:54.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.538 --rc genhtml_branch_coverage=1 00:05:54.538 --rc genhtml_function_coverage=1 00:05:54.538 --rc genhtml_legend=1 00:05:54.538 --rc geninfo_all_blocks=1 00:05:54.538 --rc geninfo_unexecuted_blocks=1 00:05:54.538 00:05:54.538 ' 00:05:54.538 03:56:42 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:54.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
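The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line is printed by waitforlisten in common/autotest_common.sh while it polls for readiness. A sketch of its shape under what the trace shows (rpc_addr defaulting to /var/tmp/spdk.sock, max_retries=100); the rpc.py probe is a plausible reconstruction, so treat the exact command as an assumption:

    # Poll until $pid is listening on $rpc_addr, or give up.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do                       # max_retries=100, as logged
            kill -0 "$pid" 2>/dev/null || return 1            # target died while we waited
            # rpc_get_methods (its full output appears later in this log) is a cheap query.
            if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                                      # socket is up and answering
            fi
            sleep 0.5
        done
        return 1                                              # never started listening
    }
    waitforlisten 57909   # pid from this run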
00:05:54.538 03:56:42 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57909 00:05:54.538 03:56:42 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57909 00:05:54.538 03:56:42 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.538 03:56:42 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57909 ']' 00:05:54.538 03:56:42 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.538 03:56:42 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.538 03:56:42 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.538 03:56:42 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.538 03:56:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.799 [2024-12-06 03:56:42.116440] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:05:54.799 [2024-12-06 03:56:42.116563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57909 ] 00:05:54.799 [2024-12-06 03:56:42.279456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.061 [2024-12-06 03:56:42.378109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.633 03:56:42 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.633 03:56:42 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:55.633 03:56:42 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:55.895 03:56:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57909 00:05:55.895 03:56:43 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57909 ']' 00:05:55.895 03:56:43 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57909 00:05:55.895 03:56:43 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:55.895 03:56:43 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.895 03:56:43 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57909 00:05:55.895 killing process with pid 57909 00:05:55.895 03:56:43 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.895 03:56:43 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.895 03:56:43 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57909' 00:05:55.895 03:56:43 alias_rpc -- common/autotest_common.sh@973 -- # kill 57909 00:05:55.895 03:56:43 alias_rpc -- common/autotest_common.sh@978 -- # wait 57909 00:05:57.296 ************************************ 00:05:57.296 END TEST alias_rpc 00:05:57.296 ************************************ 00:05:57.296 00:05:57.296 real 0m2.828s 00:05:57.296 user 0m2.957s 00:05:57.297 sys 0m0.393s 00:05:57.297 03:56:44 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.297 03:56:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.297 03:56:44 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:57.297 03:56:44 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:57.297 03:56:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.297 03:56:44 -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:05:57.297 03:56:44 -- common/autotest_common.sh@10 -- # set +x 00:05:57.297 ************************************ 00:05:57.297 START TEST spdkcli_tcp 00:05:57.297 ************************************ 00:05:57.297 03:56:44 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:57.583 * Looking for test storage... 00:05:57.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:57.583 03:56:44 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:57.583 03:56:44 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:57.583 03:56:44 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:57.583 03:56:44 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:57.583 03:56:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.584 03:56:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:57.584 03:56:44 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.584 03:56:44 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.584 03:56:44 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.584 03:56:44 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:57.584 03:56:44 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.584 03:56:44 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:57.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.584 --rc genhtml_branch_coverage=1 00:05:57.584 --rc genhtml_function_coverage=1 00:05:57.584 --rc genhtml_legend=1 00:05:57.584 --rc geninfo_all_blocks=1 00:05:57.584 --rc geninfo_unexecuted_blocks=1 00:05:57.584 00:05:57.584 ' 00:05:57.584 03:56:44 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:57.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.584 --rc genhtml_branch_coverage=1 00:05:57.584 --rc genhtml_function_coverage=1 00:05:57.584 --rc genhtml_legend=1 00:05:57.584 --rc geninfo_all_blocks=1 00:05:57.584 --rc geninfo_unexecuted_blocks=1 00:05:57.584 00:05:57.584 ' 00:05:57.584 03:56:44 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:57.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.584 --rc genhtml_branch_coverage=1 00:05:57.584 --rc genhtml_function_coverage=1 00:05:57.584 --rc genhtml_legend=1 00:05:57.584 --rc geninfo_all_blocks=1 00:05:57.584 --rc geninfo_unexecuted_blocks=1 00:05:57.584 00:05:57.584 ' 00:05:57.584 03:56:44 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:57.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.584 --rc genhtml_branch_coverage=1 00:05:57.584 --rc genhtml_function_coverage=1 00:05:57.584 --rc genhtml_legend=1 00:05:57.584 --rc geninfo_all_blocks=1 00:05:57.584 --rc geninfo_unexecuted_blocks=1 00:05:57.584 00:05:57.584 ' 00:05:57.584 03:56:44 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:57.584 03:56:44 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:57.584 03:56:44 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:57.584 03:56:44 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:57.584 03:56:44 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:57.584 03:56:44 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:57.584 03:56:44 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:57.584 03:56:44 spdkcli_tcp -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:05:57.584 03:56:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:57.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.584 03:56:44 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58005 00:05:57.584 03:56:44 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58005 00:05:57.584 03:56:44 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58005 ']' 00:05:57.584 03:56:44 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.584 03:56:44 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.584 03:56:44 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:57.584 03:56:44 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.584 03:56:44 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.584 03:56:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:57.584 [2024-12-06 03:56:44.974803] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:05:57.584 [2024-12-06 03:56:44.974928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58005 ] 00:05:57.845 [2024-12-06 03:56:45.136976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.845 [2024-12-06 03:56:45.241031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.845 [2024-12-06 03:56:45.241152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.415 03:56:45 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.415 03:56:45 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:58.415 03:56:45 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58022 00:05:58.415 03:56:45 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:58.415 03:56:45 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:58.677 [ 00:05:58.677 "bdev_malloc_delete", 00:05:58.677 "bdev_malloc_create", 00:05:58.677 "bdev_null_resize", 00:05:58.677 "bdev_null_delete", 00:05:58.677 "bdev_null_create", 00:05:58.677 "bdev_nvme_cuse_unregister", 00:05:58.677 "bdev_nvme_cuse_register", 00:05:58.677 "bdev_opal_new_user", 00:05:58.677 "bdev_opal_set_lock_state", 00:05:58.677 "bdev_opal_delete", 00:05:58.677 "bdev_opal_get_info", 00:05:58.677 "bdev_opal_create", 00:05:58.677 "bdev_nvme_opal_revert", 00:05:58.677 "bdev_nvme_opal_init", 00:05:58.677 "bdev_nvme_send_cmd", 00:05:58.677 "bdev_nvme_set_keys", 00:05:58.677 "bdev_nvme_get_path_iostat", 00:05:58.677 "bdev_nvme_get_mdns_discovery_info", 00:05:58.677 "bdev_nvme_stop_mdns_discovery", 00:05:58.677 "bdev_nvme_start_mdns_discovery", 00:05:58.677 "bdev_nvme_set_multipath_policy", 00:05:58.677 "bdev_nvme_set_preferred_path", 00:05:58.677 "bdev_nvme_get_io_paths", 00:05:58.677 "bdev_nvme_remove_error_injection", 00:05:58.677 "bdev_nvme_add_error_injection", 00:05:58.677 "bdev_nvme_get_discovery_info", 00:05:58.677 "bdev_nvme_stop_discovery", 00:05:58.677 "bdev_nvme_start_discovery", 00:05:58.677 
"bdev_nvme_get_controller_health_info", 00:05:58.677 "bdev_nvme_disable_controller", 00:05:58.677 "bdev_nvme_enable_controller", 00:05:58.677 "bdev_nvme_reset_controller", 00:05:58.677 "bdev_nvme_get_transport_statistics", 00:05:58.677 "bdev_nvme_apply_firmware", 00:05:58.677 "bdev_nvme_detach_controller", 00:05:58.677 "bdev_nvme_get_controllers", 00:05:58.677 "bdev_nvme_attach_controller", 00:05:58.677 "bdev_nvme_set_hotplug", 00:05:58.677 "bdev_nvme_set_options", 00:05:58.677 "bdev_passthru_delete", 00:05:58.677 "bdev_passthru_create", 00:05:58.677 "bdev_lvol_set_parent_bdev", 00:05:58.677 "bdev_lvol_set_parent", 00:05:58.677 "bdev_lvol_check_shallow_copy", 00:05:58.677 "bdev_lvol_start_shallow_copy", 00:05:58.677 "bdev_lvol_grow_lvstore", 00:05:58.677 "bdev_lvol_get_lvols", 00:05:58.677 "bdev_lvol_get_lvstores", 00:05:58.677 "bdev_lvol_delete", 00:05:58.677 "bdev_lvol_set_read_only", 00:05:58.677 "bdev_lvol_resize", 00:05:58.677 "bdev_lvol_decouple_parent", 00:05:58.677 "bdev_lvol_inflate", 00:05:58.677 "bdev_lvol_rename", 00:05:58.677 "bdev_lvol_clone_bdev", 00:05:58.677 "bdev_lvol_clone", 00:05:58.677 "bdev_lvol_snapshot", 00:05:58.677 "bdev_lvol_create", 00:05:58.677 "bdev_lvol_delete_lvstore", 00:05:58.677 "bdev_lvol_rename_lvstore", 00:05:58.677 "bdev_lvol_create_lvstore", 00:05:58.677 "bdev_raid_set_options", 00:05:58.677 "bdev_raid_remove_base_bdev", 00:05:58.677 "bdev_raid_add_base_bdev", 00:05:58.677 "bdev_raid_delete", 00:05:58.677 "bdev_raid_create", 00:05:58.677 "bdev_raid_get_bdevs", 00:05:58.677 "bdev_error_inject_error", 00:05:58.677 "bdev_error_delete", 00:05:58.677 "bdev_error_create", 00:05:58.677 "bdev_split_delete", 00:05:58.677 "bdev_split_create", 00:05:58.677 "bdev_delay_delete", 00:05:58.677 "bdev_delay_create", 00:05:58.677 "bdev_delay_update_latency", 00:05:58.677 "bdev_zone_block_delete", 00:05:58.677 "bdev_zone_block_create", 00:05:58.677 "blobfs_create", 00:05:58.677 "blobfs_detect", 00:05:58.677 "blobfs_set_cache_size", 00:05:58.677 "bdev_xnvme_delete", 00:05:58.677 "bdev_xnvme_create", 00:05:58.677 "bdev_aio_delete", 00:05:58.677 "bdev_aio_rescan", 00:05:58.677 "bdev_aio_create", 00:05:58.677 "bdev_ftl_set_property", 00:05:58.677 "bdev_ftl_get_properties", 00:05:58.677 "bdev_ftl_get_stats", 00:05:58.677 "bdev_ftl_unmap", 00:05:58.677 "bdev_ftl_unload", 00:05:58.677 "bdev_ftl_delete", 00:05:58.677 "bdev_ftl_load", 00:05:58.677 "bdev_ftl_create", 00:05:58.677 "bdev_virtio_attach_controller", 00:05:58.677 "bdev_virtio_scsi_get_devices", 00:05:58.677 "bdev_virtio_detach_controller", 00:05:58.677 "bdev_virtio_blk_set_hotplug", 00:05:58.677 "bdev_iscsi_delete", 00:05:58.677 "bdev_iscsi_create", 00:05:58.677 "bdev_iscsi_set_options", 00:05:58.677 "accel_error_inject_error", 00:05:58.677 "ioat_scan_accel_module", 00:05:58.677 "dsa_scan_accel_module", 00:05:58.677 "iaa_scan_accel_module", 00:05:58.677 "keyring_file_remove_key", 00:05:58.677 "keyring_file_add_key", 00:05:58.677 "keyring_linux_set_options", 00:05:58.677 "fsdev_aio_delete", 00:05:58.677 "fsdev_aio_create", 00:05:58.677 "iscsi_get_histogram", 00:05:58.677 "iscsi_enable_histogram", 00:05:58.677 "iscsi_set_options", 00:05:58.677 "iscsi_get_auth_groups", 00:05:58.677 "iscsi_auth_group_remove_secret", 00:05:58.677 "iscsi_auth_group_add_secret", 00:05:58.677 "iscsi_delete_auth_group", 00:05:58.677 "iscsi_create_auth_group", 00:05:58.677 "iscsi_set_discovery_auth", 00:05:58.677 "iscsi_get_options", 00:05:58.677 "iscsi_target_node_request_logout", 00:05:58.677 "iscsi_target_node_set_redirect", 00:05:58.677 
"iscsi_target_node_set_auth", 00:05:58.677 "iscsi_target_node_add_lun", 00:05:58.677 "iscsi_get_stats", 00:05:58.677 "iscsi_get_connections", 00:05:58.677 "iscsi_portal_group_set_auth", 00:05:58.677 "iscsi_start_portal_group", 00:05:58.677 "iscsi_delete_portal_group", 00:05:58.677 "iscsi_create_portal_group", 00:05:58.677 "iscsi_get_portal_groups", 00:05:58.677 "iscsi_delete_target_node", 00:05:58.677 "iscsi_target_node_remove_pg_ig_maps", 00:05:58.677 "iscsi_target_node_add_pg_ig_maps", 00:05:58.677 "iscsi_create_target_node", 00:05:58.677 "iscsi_get_target_nodes", 00:05:58.677 "iscsi_delete_initiator_group", 00:05:58.677 "iscsi_initiator_group_remove_initiators", 00:05:58.677 "iscsi_initiator_group_add_initiators", 00:05:58.677 "iscsi_create_initiator_group", 00:05:58.677 "iscsi_get_initiator_groups", 00:05:58.677 "nvmf_set_crdt", 00:05:58.677 "nvmf_set_config", 00:05:58.677 "nvmf_set_max_subsystems", 00:05:58.677 "nvmf_stop_mdns_prr", 00:05:58.677 "nvmf_publish_mdns_prr", 00:05:58.677 "nvmf_subsystem_get_listeners", 00:05:58.677 "nvmf_subsystem_get_qpairs", 00:05:58.677 "nvmf_subsystem_get_controllers", 00:05:58.677 "nvmf_get_stats", 00:05:58.677 "nvmf_get_transports", 00:05:58.677 "nvmf_create_transport", 00:05:58.677 "nvmf_get_targets", 00:05:58.677 "nvmf_delete_target", 00:05:58.677 "nvmf_create_target", 00:05:58.677 "nvmf_subsystem_allow_any_host", 00:05:58.677 "nvmf_subsystem_set_keys", 00:05:58.677 "nvmf_subsystem_remove_host", 00:05:58.677 "nvmf_subsystem_add_host", 00:05:58.677 "nvmf_ns_remove_host", 00:05:58.677 "nvmf_ns_add_host", 00:05:58.677 "nvmf_subsystem_remove_ns", 00:05:58.677 "nvmf_subsystem_set_ns_ana_group", 00:05:58.677 "nvmf_subsystem_add_ns", 00:05:58.677 "nvmf_subsystem_listener_set_ana_state", 00:05:58.677 "nvmf_discovery_get_referrals", 00:05:58.677 "nvmf_discovery_remove_referral", 00:05:58.677 "nvmf_discovery_add_referral", 00:05:58.677 "nvmf_subsystem_remove_listener", 00:05:58.677 "nvmf_subsystem_add_listener", 00:05:58.677 "nvmf_delete_subsystem", 00:05:58.677 "nvmf_create_subsystem", 00:05:58.677 "nvmf_get_subsystems", 00:05:58.677 "env_dpdk_get_mem_stats", 00:05:58.677 "nbd_get_disks", 00:05:58.677 "nbd_stop_disk", 00:05:58.677 "nbd_start_disk", 00:05:58.677 "ublk_recover_disk", 00:05:58.677 "ublk_get_disks", 00:05:58.677 "ublk_stop_disk", 00:05:58.677 "ublk_start_disk", 00:05:58.677 "ublk_destroy_target", 00:05:58.677 "ublk_create_target", 00:05:58.677 "virtio_blk_create_transport", 00:05:58.677 "virtio_blk_get_transports", 00:05:58.677 "vhost_controller_set_coalescing", 00:05:58.677 "vhost_get_controllers", 00:05:58.677 "vhost_delete_controller", 00:05:58.677 "vhost_create_blk_controller", 00:05:58.677 "vhost_scsi_controller_remove_target", 00:05:58.677 "vhost_scsi_controller_add_target", 00:05:58.677 "vhost_start_scsi_controller", 00:05:58.677 "vhost_create_scsi_controller", 00:05:58.677 "thread_set_cpumask", 00:05:58.677 "scheduler_set_options", 00:05:58.677 "framework_get_governor", 00:05:58.677 "framework_get_scheduler", 00:05:58.677 "framework_set_scheduler", 00:05:58.677 "framework_get_reactors", 00:05:58.677 "thread_get_io_channels", 00:05:58.677 "thread_get_pollers", 00:05:58.677 "thread_get_stats", 00:05:58.677 "framework_monitor_context_switch", 00:05:58.677 "spdk_kill_instance", 00:05:58.677 "log_enable_timestamps", 00:05:58.677 "log_get_flags", 00:05:58.677 "log_clear_flag", 00:05:58.677 "log_set_flag", 00:05:58.677 "log_get_level", 00:05:58.677 "log_set_level", 00:05:58.677 "log_get_print_level", 00:05:58.678 "log_set_print_level", 
00:05:58.678 "framework_enable_cpumask_locks", 00:05:58.678 "framework_disable_cpumask_locks", 00:05:58.678 "framework_wait_init", 00:05:58.678 "framework_start_init", 00:05:58.678 "scsi_get_devices", 00:05:58.678 "bdev_get_histogram", 00:05:58.678 "bdev_enable_histogram", 00:05:58.678 "bdev_set_qos_limit", 00:05:58.678 "bdev_set_qd_sampling_period", 00:05:58.678 "bdev_get_bdevs", 00:05:58.678 "bdev_reset_iostat", 00:05:58.678 "bdev_get_iostat", 00:05:58.678 "bdev_examine", 00:05:58.678 "bdev_wait_for_examine", 00:05:58.678 "bdev_set_options", 00:05:58.678 "accel_get_stats", 00:05:58.678 "accel_set_options", 00:05:58.678 "accel_set_driver", 00:05:58.678 "accel_crypto_key_destroy", 00:05:58.678 "accel_crypto_keys_get", 00:05:58.678 "accel_crypto_key_create", 00:05:58.678 "accel_assign_opc", 00:05:58.678 "accel_get_module_info", 00:05:58.678 "accel_get_opc_assignments", 00:05:58.678 "vmd_rescan", 00:05:58.678 "vmd_remove_device", 00:05:58.678 "vmd_enable", 00:05:58.678 "sock_get_default_impl", 00:05:58.678 "sock_set_default_impl", 00:05:58.678 "sock_impl_set_options", 00:05:58.678 "sock_impl_get_options", 00:05:58.678 "iobuf_get_stats", 00:05:58.678 "iobuf_set_options", 00:05:58.678 "keyring_get_keys", 00:05:58.678 "framework_get_pci_devices", 00:05:58.678 "framework_get_config", 00:05:58.678 "framework_get_subsystems", 00:05:58.678 "fsdev_set_opts", 00:05:58.678 "fsdev_get_opts", 00:05:58.678 "trace_get_info", 00:05:58.678 "trace_get_tpoint_group_mask", 00:05:58.678 "trace_disable_tpoint_group", 00:05:58.678 "trace_enable_tpoint_group", 00:05:58.678 "trace_clear_tpoint_mask", 00:05:58.678 "trace_set_tpoint_mask", 00:05:58.678 "notify_get_notifications", 00:05:58.678 "notify_get_types", 00:05:58.678 "spdk_get_version", 00:05:58.678 "rpc_get_methods" 00:05:58.678 ] 00:05:58.678 03:56:46 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:58.678 03:56:46 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:58.678 03:56:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.678 03:56:46 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:58.678 03:56:46 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58005 00:05:58.678 03:56:46 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58005 ']' 00:05:58.678 03:56:46 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58005 00:05:58.678 03:56:46 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:58.678 03:56:46 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.678 03:56:46 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58005 00:05:58.678 killing process with pid 58005 00:05:58.678 03:56:46 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.678 03:56:46 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.678 03:56:46 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58005' 00:05:58.678 03:56:46 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58005 00:05:58.678 03:56:46 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58005 00:06:00.590 ************************************ 00:06:00.590 END TEST spdkcli_tcp 00:06:00.590 ************************************ 00:06:00.590 00:06:00.590 real 0m2.914s 00:06:00.590 user 0m5.309s 00:06:00.590 sys 0m0.438s 00:06:00.590 03:56:47 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.590 03:56:47 spdkcli_tcp -- common/autotest_common.sh@10 
-- # set +x 00:06:00.590 03:56:47 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:00.590 03:56:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.590 03:56:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.590 03:56:47 -- common/autotest_common.sh@10 -- # set +x 00:06:00.590 ************************************ 00:06:00.590 START TEST dpdk_mem_utility 00:06:00.590 ************************************ 00:06:00.590 03:56:47 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:00.590 * Looking for test storage... 00:06:00.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:00.590 03:56:47 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:00.590 03:56:47 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:00.590 03:56:47 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:00.590 03:56:47 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.590 03:56:47 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:00.590 03:56:47 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.590 03:56:47 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:00.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.590 --rc genhtml_branch_coverage=1 00:06:00.590 --rc genhtml_function_coverage=1 00:06:00.590 --rc genhtml_legend=1 00:06:00.590 --rc geninfo_all_blocks=1 00:06:00.590 --rc geninfo_unexecuted_blocks=1 00:06:00.590 00:06:00.590 ' 00:06:00.590 03:56:47 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:00.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.590 --rc genhtml_branch_coverage=1 00:06:00.590 --rc genhtml_function_coverage=1 00:06:00.590 --rc genhtml_legend=1 00:06:00.590 --rc geninfo_all_blocks=1 00:06:00.590 --rc geninfo_unexecuted_blocks=1 00:06:00.590 00:06:00.590 ' 00:06:00.590 03:56:47 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:00.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.590 --rc genhtml_branch_coverage=1 00:06:00.590 --rc genhtml_function_coverage=1 00:06:00.590 --rc genhtml_legend=1 00:06:00.590 --rc geninfo_all_blocks=1 00:06:00.590 --rc geninfo_unexecuted_blocks=1 00:06:00.590 00:06:00.590 ' 00:06:00.590 03:56:47 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:00.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.590 --rc genhtml_branch_coverage=1 00:06:00.590 --rc genhtml_function_coverage=1 00:06:00.590 --rc genhtml_legend=1 00:06:00.590 --rc geninfo_all_blocks=1 00:06:00.590 --rc geninfo_unexecuted_blocks=1 00:06:00.590 00:06:00.590 ' 00:06:00.590 03:56:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:00.590 03:56:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58116 00:06:00.590 03:56:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58116 00:06:00.590 03:56:47 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58116 ']' 00:06:00.590 03:56:47 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.590 03:56:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.590 03:56:47 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.590 03:56:47 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.590 03:56:47 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.590 03:56:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:00.590 [2024-12-06 03:56:47.923075] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:06:00.591 [2024-12-06 03:56:47.923753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58116 ] 00:06:00.591 [2024-12-06 03:56:48.079220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.849 [2024-12-06 03:56:48.178963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.417 03:56:48 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.417 03:56:48 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:01.417 03:56:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:01.417 03:56:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:01.417 03:56:48 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.417 03:56:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.417 { 00:06:01.417 "filename": "/tmp/spdk_mem_dump.txt" 00:06:01.417 } 00:06:01.417 03:56:48 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.417 03:56:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:01.417 DPDK memory size 824.000000 MiB in 1 heap(s) 00:06:01.417 1 heaps totaling size 824.000000 MiB 00:06:01.417 size: 824.000000 MiB heap id: 0 00:06:01.417 end heaps---------- 00:06:01.417 9 mempools totaling size 603.782043 MiB 00:06:01.417 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:01.417 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:01.417 size: 100.555481 MiB name: bdev_io_58116 00:06:01.417 size: 50.003479 MiB name: msgpool_58116 00:06:01.417 size: 36.509338 MiB name: fsdev_io_58116 00:06:01.417 size: 21.763794 MiB name: PDU_Pool 00:06:01.417 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:01.417 size: 4.133484 MiB name: evtpool_58116 00:06:01.417 size: 0.026123 MiB name: Session_Pool 00:06:01.417 end mempools------- 00:06:01.417 6 memzones totaling size 4.142822 MiB 00:06:01.417 size: 1.000366 MiB name: RG_ring_0_58116 00:06:01.417 size: 1.000366 MiB name: RG_ring_1_58116 00:06:01.417 size: 1.000366 MiB name: RG_ring_4_58116 00:06:01.417 size: 1.000366 MiB name: RG_ring_5_58116 00:06:01.417 size: 0.125366 MiB name: RG_ring_2_58116 00:06:01.417 size: 0.015991 MiB name: RG_ring_3_58116 00:06:01.417 end memzones------- 00:06:01.417 03:56:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:01.417 heap id: 0 total size: 824.000000 MiB number of busy 
elements: 328 number of free elements: 18 00:06:01.417 list of free elements. size: 16.778198 MiB 00:06:01.417 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:01.417 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:01.417 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:01.417 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:01.417 element at address: 0x200019900040 with size: 0.999939 MiB 00:06:01.417 element at address: 0x200019a00000 with size: 0.999084 MiB 00:06:01.417 element at address: 0x200032600000 with size: 0.994324 MiB 00:06:01.417 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:01.417 element at address: 0x200019200000 with size: 0.959656 MiB 00:06:01.417 element at address: 0x200019d00040 with size: 0.936401 MiB 00:06:01.417 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:01.417 element at address: 0x20001b400000 with size: 0.559509 MiB 00:06:01.417 element at address: 0x200000c00000 with size: 0.489197 MiB 00:06:01.417 element at address: 0x200019600000 with size: 0.487976 MiB 00:06:01.417 element at address: 0x200019e00000 with size: 0.485413 MiB 00:06:01.417 element at address: 0x200012c00000 with size: 0.433228 MiB 00:06:01.417 element at address: 0x200028800000 with size: 0.390686 MiB 00:06:01.417 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:01.417 list of standard malloc elements. size: 199.290894 MiB 00:06:01.417 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:06:01.417 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:06:01.417 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:01.417 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:01.417 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:06:01.417 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:01.417 element at address: 0x200019deff40 with size: 0.062683 MiB 00:06:01.417 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:01.417 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:06:01.417 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:06:01.417 element at address: 0x200012bff040 with size: 0.000305 MiB 00:06:01.417 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:01.417 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:01.417 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:06:01.417 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:06:01.417 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:06:01.417 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:06:01.417 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:06:01.417 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:06:01.417 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:06:01.417 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:06:01.417 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:06:01.417 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:06:01.417 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:06:01.417 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:06:01.417 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:06:01.417 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:06:01.417 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:06:01.417 element at address: 0x2000004fee40 with size: 0.000244 MiB 
00:06:01.417 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:06:01.417 [ ... repeated for several hundred further standard malloc elements of 0.000244 MiB each, at consecutive addresses up through 0x20002886fb80 ... ]
00:06:01.418 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:06:01.418 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:06:01.418 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:06:01.418 list of memzone associated elements. size: 607.930908 MiB 00:06:01.418 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:06:01.418 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:01.418 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:06:01.418 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:01.418 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:06:01.418 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58116_0 00:06:01.418 element at address: 0x200000dff340 with size: 48.003113 MiB 00:06:01.418 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58116_0 00:06:01.418 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:06:01.418 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58116_0 00:06:01.418 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:06:01.418 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:01.418 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:06:01.418 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:01.418 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:06:01.418 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58116_0 00:06:01.418 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:06:01.418 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58116 00:06:01.418 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:01.419 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58116 00:06:01.419 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:06:01.419 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:01.419 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:06:01.419 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:01.419 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:01.419 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:01.419 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:06:01.419 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:01.419 element at address: 0x200000cff100 with size: 1.000549 MiB 00:06:01.419 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58116 00:06:01.419 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:06:01.419 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58116 00:06:01.419 element at address: 0x200019affd40 with size: 1.000549 MiB 00:06:01.419 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58116 00:06:01.419 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:06:01.419 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58116 00:06:01.419 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:06:01.419 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58116 00:06:01.419 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:06:01.419 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58116 00:06:01.419 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:06:01.419 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:01.419 
element at address: 0x200012c6f980 with size: 0.500549 MiB 00:06:01.419 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:01.419 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:06:01.419 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:01.419 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:06:01.419 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58116 00:06:01.419 element at address: 0x20000085df80 with size: 0.125549 MiB 00:06:01.419 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58116 00:06:01.419 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:06:01.419 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:01.419 element at address: 0x200028864240 with size: 0.023804 MiB 00:06:01.419 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:01.419 element at address: 0x200000859d40 with size: 0.016174 MiB 00:06:01.419 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58116 00:06:01.419 element at address: 0x20002886a3c0 with size: 0.002502 MiB 00:06:01.419 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:01.419 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:06:01.419 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58116 00:06:01.419 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:06:01.419 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58116 00:06:01.419 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:06:01.419 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58116 00:06:01.419 element at address: 0x20002886af00 with size: 0.000366 MiB 00:06:01.419 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:01.419 03:56:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:01.419 03:56:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58116 00:06:01.419 03:56:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58116 ']' 00:06:01.419 03:56:48 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58116 00:06:01.419 03:56:48 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:01.419 03:56:48 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.419 03:56:48 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58116 00:06:01.419 03:56:48 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.419 killing process with pid 58116 03:56:48 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.419 03:56:48 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58116' 00:06:01.419 03:56:48 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58116 00:06:01.419 03:56:48 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58116 00:06:03.378 00:06:03.378 real 0m2.726s 00:06:03.378 user 0m2.740s 00:06:03.378 sys 0m0.393s 00:06:03.378 03:56:50 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.378 03:56:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.378 ************************************ 00:06:03.378 END TEST dpdk_mem_utility 00:06:03.378 ************************************
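For anyone reproducing this memory inspection by hand: the test above drives it entirely with stock SPDK tooling, so a minimal sketch against a running spdk_tgt looks like the following (paths follow the spdk_repo layout used in this run; env_dpdk_get_mem_stats writes /tmp/spdk_mem_dump.txt, as the RPC reply above shows):

  # ask the running target to dump its DPDK memory statistics
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # summarize the dump: heap count, mempools, memzones
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  # per-heap busy/free element detail, as printed for heap 0 above
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0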
00:06:03.378 03:56:50 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:03.378 03:56:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.378 03:56:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.378 03:56:50 -- common/autotest_common.sh@10 -- # set +x 00:06:03.378 ************************************ 00:06:03.378 START TEST event 00:06:03.378 ************************************ 00:06:03.378 03:56:50 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:03.378 * Looking for test storage... 00:06:03.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:03.378 03:56:50 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:03.378 03:56:50 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:03.378 03:56:50 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:03.378 03:56:50 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:03.378 03:56:50 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.378 03:56:50 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.378 03:56:50 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.378 03:56:50 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.378 03:56:50 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.378 03:56:50 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.378 03:56:50 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.378 03:56:50 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.378 03:56:50 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.378 03:56:50 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.378 03:56:50 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.378 03:56:50 event -- scripts/common.sh@344 -- # case "$op" in 00:06:03.378 03:56:50 event -- scripts/common.sh@345 -- # : 1 00:06:03.378 03:56:50 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.378 03:56:50 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:06:03.378 03:56:50 event -- scripts/common.sh@365 -- # decimal 1 00:06:03.378 03:56:50 event -- scripts/common.sh@353 -- # local d=1 00:06:03.378 03:56:50 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.378 03:56:50 event -- scripts/common.sh@355 -- # echo 1 00:06:03.378 03:56:50 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.378 03:56:50 event -- scripts/common.sh@366 -- # decimal 2 00:06:03.378 03:56:50 event -- scripts/common.sh@353 -- # local d=2 00:06:03.378 03:56:50 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.378 03:56:50 event -- scripts/common.sh@355 -- # echo 2 00:06:03.378 03:56:50 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.378 03:56:50 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.378 03:56:50 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.378 03:56:50 event -- scripts/common.sh@368 -- # return 0 00:06:03.378 03:56:50 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.378 03:56:50 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:03.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.378 --rc genhtml_branch_coverage=1 00:06:03.378 --rc genhtml_function_coverage=1 00:06:03.378 --rc genhtml_legend=1 00:06:03.378 --rc geninfo_all_blocks=1 00:06:03.378 --rc geninfo_unexecuted_blocks=1 00:06:03.378 00:06:03.378 ' 00:06:03.378 03:56:50 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:03.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.378 --rc genhtml_branch_coverage=1 00:06:03.378 --rc genhtml_function_coverage=1 00:06:03.378 --rc genhtml_legend=1 00:06:03.378 --rc geninfo_all_blocks=1 00:06:03.378 --rc geninfo_unexecuted_blocks=1 00:06:03.378 00:06:03.378 ' 00:06:03.378 03:56:50 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:03.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.378 --rc genhtml_branch_coverage=1 00:06:03.378 --rc genhtml_function_coverage=1 00:06:03.378 --rc genhtml_legend=1 00:06:03.378 --rc geninfo_all_blocks=1 00:06:03.378 --rc geninfo_unexecuted_blocks=1 00:06:03.378 00:06:03.378 ' 00:06:03.378 03:56:50 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:03.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.378 --rc genhtml_branch_coverage=1 00:06:03.378 --rc genhtml_function_coverage=1 00:06:03.378 --rc genhtml_legend=1 00:06:03.378 --rc geninfo_all_blocks=1 00:06:03.378 --rc geninfo_unexecuted_blocks=1 00:06:03.378 00:06:03.378 ' 00:06:03.378 03:56:50 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:03.378 03:56:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:03.378 03:56:50 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.378 03:56:50 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:03.378 03:56:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.378 03:56:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.378 ************************************ 00:06:03.378 START TEST event_perf 00:06:03.378 ************************************ 00:06:03.378 03:56:50 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.378 Running I/O for 1 seconds...[2024-12-06 
03:56:50.640878] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:06:03.378 [2024-12-06 03:56:50.640968] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58208 ] 00:06:03.378 [2024-12-06 03:56:50.794682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.378 [2024-12-06 03:56:50.901646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.378 [2024-12-06 03:56:50.902160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.378 [2024-12-06 03:56:50.902399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.378 [2024-12-06 03:56:50.902416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.762 Running I/O for 1 seconds... 00:06:04.762 lcore 0: 197506 00:06:04.762 lcore 1: 197507 00:06:04.762 lcore 2: 197510 00:06:04.762 lcore 3: 197512 00:06:04.762 done. 00:06:04.762 00:06:04.762 real 0m1.455s 00:06:04.762 user 0m4.260s 00:06:04.762 sys 0m0.079s 00:06:04.762 03:56:52 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.762 ************************************ 00:06:04.762 END TEST event_perf 00:06:04.762 ************************************ 00:06:04.762 03:56:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.762 03:56:52 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:04.762 03:56:52 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:04.762 03:56:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.762 03:56:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.762 ************************************ 00:06:04.762 START TEST event_reactor 00:06:04.762 ************************************ 00:06:04.762 03:56:52 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:04.762 [2024-12-06 03:56:52.128356] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
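For reference, the event_perf binary exercised above can be run by hand with exactly the flags traced in this log: -m gives the reactor core mask and -t the run time in seconds, after which it prints one event counter per lcore, as seen in the "lcore N:" lines. A minimal sketch (path as used in this run):

  # four reactors (mask 0xF), one second of event round-trips
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1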
00:06:04.762 [2024-12-06 03:56:52.128561] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58247 ] 00:06:04.762 [2024-12-06 03:56:52.285938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.022 [2024-12-06 03:56:52.379397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.405 test_start 00:06:06.405 oneshot 00:06:06.405 tick 100 00:06:06.405 tick 100 00:06:06.405 tick 250 00:06:06.405 tick 100 00:06:06.405 tick 100 00:06:06.405 tick 250 00:06:06.405 tick 100 00:06:06.405 tick 500 00:06:06.405 tick 100 00:06:06.405 tick 100 00:06:06.405 tick 250 00:06:06.405 tick 100 00:06:06.405 tick 100 00:06:06.405 test_end 00:06:06.405 00:06:06.405 real 0m1.425s 00:06:06.405 user 0m1.252s 00:06:06.405 sys 0m0.066s 00:06:06.405 03:56:53 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.405 03:56:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:06.405 ************************************ 00:06:06.405 END TEST event_reactor 00:06:06.405 ************************************ 00:06:06.405 03:56:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:06.405 03:56:53 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:06.405 03:56:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.405 03:56:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.405 ************************************ 00:06:06.405 START TEST event_reactor_perf 00:06:06.405 ************************************ 00:06:06.405 03:56:53 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:06.405 [2024-12-06 03:56:53.598125] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
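The test_start/oneshot/tick/test_end markers above come from the single-reactor timer test, while the reactor_perf run that follows measures raw event throughput instead; both take the same -t duration flag. A minimal sketch of the two invocations as traced in this harness:

  # one reactor, one second of timer events; emits the tick markers shown above
  /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
  # same run shape, but reports events per second
  /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1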
00:06:06.405 [2024-12-06 03:56:53.598237] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58284 ] 00:06:06.405 [2024-12-06 03:56:53.761081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.405 [2024-12-06 03:56:53.864666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.789 test_start 00:06:07.789 test_end 00:06:07.789 Performance: 296196 events per second 00:06:07.789 00:06:07.789 real 0m1.449s 00:06:07.789 user 0m1.268s 00:06:07.789 sys 0m0.072s 00:06:07.789 03:56:55 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.789 ************************************ 00:06:07.789 END TEST event_reactor_perf 00:06:07.789 ************************************ 00:06:07.789 03:56:55 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.789 03:56:55 event -- event/event.sh@49 -- # uname -s 00:06:07.789 03:56:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:07.789 03:56:55 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:07.789 03:56:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.789 03:56:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.789 03:56:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.789 ************************************ 00:06:07.789 START TEST event_scheduler 00:06:07.789 ************************************ 00:06:07.789 03:56:55 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:07.789 * Looking for test storage... 
00:06:07.789 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:07.789 03:56:55 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:07.789 03:56:55 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:07.789 03:56:55 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:07.789 03:56:55 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:07.789 03:56:55 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.789 03:56:55 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.789 03:56:55 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.789 03:56:55 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.789 03:56:55 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.789 03:56:55 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.789 03:56:55 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.789 03:56:55 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.789 03:56:55 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.789 03:56:55 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.789 03:56:55 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.789 03:56:55 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:07.789 03:56:55 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:07.789 03:56:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.789 03:56:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.789 03:56:55 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:07.789 03:56:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:07.789 03:56:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.789 03:56:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:07.789 03:56:55 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.789 03:56:55 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:07.790 03:56:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:07.790 03:56:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.790 03:56:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:07.790 03:56:55 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.790 03:56:55 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.790 03:56:55 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.790 03:56:55 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:07.790 03:56:55 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.790 03:56:55 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:07.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.790 --rc genhtml_branch_coverage=1 00:06:07.790 --rc genhtml_function_coverage=1 00:06:07.790 --rc genhtml_legend=1 00:06:07.790 --rc geninfo_all_blocks=1 00:06:07.790 --rc geninfo_unexecuted_blocks=1 00:06:07.790 00:06:07.790 ' 00:06:07.790 03:56:55 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:07.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.790 --rc genhtml_branch_coverage=1 00:06:07.790 --rc genhtml_function_coverage=1 00:06:07.790 --rc genhtml_legend=1 00:06:07.790 --rc geninfo_all_blocks=1 00:06:07.790 --rc geninfo_unexecuted_blocks=1 00:06:07.790 00:06:07.790 ' 00:06:07.790 03:56:55 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:07.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.790 --rc genhtml_branch_coverage=1 00:06:07.790 --rc genhtml_function_coverage=1 00:06:07.790 --rc genhtml_legend=1 00:06:07.790 --rc geninfo_all_blocks=1 00:06:07.790 --rc geninfo_unexecuted_blocks=1 00:06:07.790 00:06:07.790 ' 00:06:07.790 03:56:55 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:07.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.790 --rc genhtml_branch_coverage=1 00:06:07.790 --rc genhtml_function_coverage=1 00:06:07.790 --rc genhtml_legend=1 00:06:07.790 --rc geninfo_all_blocks=1 00:06:07.790 --rc geninfo_unexecuted_blocks=1 00:06:07.790 00:06:07.790 ' 00:06:07.790 03:56:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:07.790 03:56:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58354 00:06:07.790 03:56:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.790 03:56:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58354 00:06:07.790 03:56:55 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58354 ']' 00:06:07.790 03:56:55 event.event_scheduler -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:07.790 03:56:55 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.790 03:56:55 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.790 03:56:55 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.790 03:56:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.790 03:56:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:07.790 [2024-12-06 03:56:55.252993] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:06:07.790 [2024-12-06 03:56:55.253125] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58354 ] 00:06:08.050 [2024-12-06 03:56:55.411957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:08.050 [2024-12-06 03:56:55.521812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.050 [2024-12-06 03:56:55.521939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.050 [2024-12-06 03:56:55.522411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.050 [2024-12-06 03:56:55.522418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.619 03:56:56 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.619 03:56:56 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:08.619 03:56:56 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:08.619 03:56:56 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.619 03:56:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.619 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.619 POWER: Cannot set governor of lcore 0 to userspace 00:06:08.619 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.619 POWER: Cannot set governor of lcore 0 to performance 00:06:08.619 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.619 POWER: Cannot set governor of lcore 0 to userspace 00:06:08.619 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.619 POWER: Cannot set governor of lcore 0 to userspace 00:06:08.619 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:08.619 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:08.619 POWER: Unable to set Power Management Environment for lcore 0 00:06:08.619 [2024-12-06 03:56:56.103803] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:08.619 [2024-12-06 03:56:56.103829] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:08.619 [2024-12-06 03:56:56.103841] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:08.619 [2024-12-06 03:56:56.103861] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:08.619 [2024-12-06 03:56:56.103868] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:08.619 [2024-12-06 03:56:56.103877] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:08.619 03:56:56 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.619 03:56:56 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:08.619 03:56:56 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.619 03:56:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.879 [2024-12-06 03:56:56.365140] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:08.879 03:56:56 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.879 03:56:56 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:08.879 03:56:56 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.879 03:56:56 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.879 03:56:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.879 ************************************ 00:06:08.879 START TEST scheduler_create_thread 00:06:08.879 ************************************ 00:06:08.879 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:08.879 03:56:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:08.879 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.879 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.879 2 00:06:08.879 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.879 03:56:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:08.879 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.879 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.879 3 00:06:08.879 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.879 03:56:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:08.879 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.879 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.140 4 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.140 5 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.140 6 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.140 7 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.140 8 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.140 9 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.140 10 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.140 03:56:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.082 03:56:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.082 00:06:10.082 real 0m1.175s 00:06:10.082 user 0m0.020s 00:06:10.082 sys 0m0.002s 00:06:10.082 ************************************ 00:06:10.082 03:56:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.082 03:56:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.082 END TEST scheduler_create_thread 00:06:10.082 ************************************ 00:06:10.082 03:56:57 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:10.082 03:56:57 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58354 00:06:10.082 03:56:57 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58354 ']' 00:06:10.082 03:56:57 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58354 00:06:10.082 03:56:57 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:10.344 03:56:57 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.344 03:56:57 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58354 00:06:10.344 03:56:57 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:10.344 03:56:57 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:10.345 killing process with pid 58354 00:06:10.345 03:56:57 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58354' 00:06:10.345 03:56:57 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58354 00:06:10.345 03:56:57 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 58354 00:06:10.606 [2024-12-06 03:56:58.035705] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:11.550 00:06:11.550 real 0m3.808s 00:06:11.550 user 0m6.212s 00:06:11.550 sys 0m0.359s 00:06:11.550 03:56:58 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.550 ************************************ 00:06:11.550 END TEST event_scheduler 00:06:11.550 ************************************ 00:06:11.550 03:56:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.550 03:56:58 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:11.550 03:56:58 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:11.550 03:56:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.550 03:56:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.550 03:56:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.550 ************************************ 00:06:11.550 START TEST app_repeat 00:06:11.550 ************************************ 00:06:11.550 03:56:58 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:11.550 03:56:58 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.550 03:56:58 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.550 03:56:58 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:11.550 03:56:58 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.550 03:56:58 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:11.550 03:56:58 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:11.550 03:56:58 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:11.550 03:56:58 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58444 00:06:11.550 03:56:58 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.550 Process app_repeat pid: 58444 00:06:11.550 03:56:58 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58444' 00:06:11.550 03:56:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.550 spdk_app_start Round 0 00:06:11.550 03:56:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:11.550 03:56:58 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:11.550 03:56:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58444 /var/tmp/spdk-nbd.sock 00:06:11.550 03:56:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58444 ']' 00:06:11.550 03:56:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.550 03:56:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.550 03:56:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.550 03:56:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.550 03:56:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.550 [2024-12-06 03:56:58.989843] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
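Both the scheduler teardown just above (pid 58354) and the app_repeat teardown later in this log go through autotest_common.sh's killprocess helper. The sketch below is reconstructed from the xtrace lines rather than copied from the SPDK source, so treat the exact bodies as approximate:

# killprocess, as reconstructed from the trace above (approximate).
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1              # trace: '[' -z 58354 ']'
    kill -0 "$pid" || return 0             # trace: kill -0 58354 -- still alive?
    if [ "$(uname)" = Linux ]; then
        # Resolve the comm name first and refuse to kill sudo itself.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                            # returns once SIGTERM has landed
}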
00:06:11.550 [2024-12-06 03:56:58.989996] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58444 ] 00:06:11.811 [2024-12-06 03:56:59.156795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.811 [2024-12-06 03:56:59.294130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.811 [2024-12-06 03:56:59.294352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.408 03:56:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.408 03:56:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:12.408 03:56:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.668 Malloc0 00:06:12.668 03:57:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.929 Malloc1 00:06:12.929 03:57:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.929 03:57:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.929 03:57:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.929 03:57:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.929 03:57:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.929 03:57:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.929 03:57:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.929 03:57:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.929 03:57:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.929 03:57:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.929 03:57:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.929 03:57:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.929 03:57:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:12.929 03:57:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.929 03:57:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.929 03:57:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.191 /dev/nbd0 00:06:13.191 03:57:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.191 03:57:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.191 03:57:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:13.191 03:57:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:13.191 03:57:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:13.191 03:57:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:13.191 03:57:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:13.191 03:57:00 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:13.191 03:57:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:13.191 03:57:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:13.191 03:57:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.191 1+0 records in 00:06:13.191 1+0 records out 00:06:13.191 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347425 s, 11.8 MB/s 00:06:13.191 03:57:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.191 03:57:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:13.191 03:57:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.191 03:57:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:13.191 03:57:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:13.191 03:57:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.191 03:57:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.191 03:57:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:13.452 /dev/nbd1 00:06:13.452 03:57:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.452 03:57:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.452 03:57:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:13.452 03:57:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:13.452 03:57:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:13.452 03:57:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:13.452 03:57:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:13.452 03:57:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:13.452 03:57:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:13.452 03:57:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:13.452 03:57:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.452 1+0 records in 00:06:13.452 1+0 records out 00:06:13.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379232 s, 10.8 MB/s 00:06:13.452 03:57:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.711 03:57:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:13.711 03:57:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.711 03:57:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:13.711 03:57:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:13.711 03:57:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.711 03:57:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.711 03:57:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.711 03:57:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
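waitfornbd, traced above for nbd0 and nbd1, first polls /proc/partitions for the device name and then proves the device answers reads with a single direct-I/O dd. A minimal sketch assembled from those xtrace lines; the temp-file path is shortened here, and the retry sleep is an assumption, since the traced run succeeded on its first pass and no sleep appears in the log:

waitfornbd() {
    local nbd_name=$1 i

    # Wait for the kernel to publish the device (trace: grep -q -w nbd0 /proc/partitions).
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1    # assumed back-off; not visible in the trace
    done

    # Read one 4 KiB block with direct I/O, then confirm bytes actually arrived.
    for ((i = 1; i <= 20; i++)); do
        if dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
            local size
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        fi
    done
    return 1
}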
00:06:13.711 03:57:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.711 03:57:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.711 { 00:06:13.711 "nbd_device": "/dev/nbd0", 00:06:13.711 "bdev_name": "Malloc0" 00:06:13.711 }, 00:06:13.711 { 00:06:13.711 "nbd_device": "/dev/nbd1", 00:06:13.711 "bdev_name": "Malloc1" 00:06:13.711 } 00:06:13.711 ]' 00:06:13.711 03:57:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.711 { 00:06:13.711 "nbd_device": "/dev/nbd0", 00:06:13.711 "bdev_name": "Malloc0" 00:06:13.711 }, 00:06:13.711 { 00:06:13.711 "nbd_device": "/dev/nbd1", 00:06:13.711 "bdev_name": "Malloc1" 00:06:13.711 } 00:06:13.711 ]' 00:06:13.711 03:57:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.969 03:57:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.969 /dev/nbd1' 00:06:13.969 03:57:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.969 /dev/nbd1' 00:06:13.969 03:57:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.969 03:57:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:13.969 03:57:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.969 03:57:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.969 03:57:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.969 03:57:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.969 03:57:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.969 03:57:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.969 03:57:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.969 03:57:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.969 03:57:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.969 03:57:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:13.969 256+0 records in 00:06:13.969 256+0 records out 00:06:13.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00622272 s, 169 MB/s 00:06:13.969 03:57:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.969 03:57:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.969 256+0 records in 00:06:13.970 256+0 records out 00:06:13.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258219 s, 40.6 MB/s 00:06:13.970 03:57:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.970 03:57:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:13.970 256+0 records in 00:06:13.970 256+0 records out 00:06:13.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233726 s, 44.9 MB/s 00:06:13.970 03:57:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:13.970 03:57:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.970 03:57:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.970 03:57:01 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:13.970 03:57:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.970 03:57:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:13.970 03:57:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:13.970 03:57:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.970 03:57:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:13.970 03:57:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.970 03:57:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:13.970 03:57:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.970 03:57:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:13.970 03:57:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.970 03:57:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.970 03:57:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.970 03:57:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:13.970 03:57:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.970 03:57:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.230 03:57:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.230 03:57:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.230 03:57:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.230 03:57:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.230 03:57:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.230 03:57:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.230 03:57:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.230 03:57:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.230 03:57:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.230 03:57:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.490 03:57:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.490 03:57:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.490 03:57:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.490 03:57:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.490 03:57:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.490 03:57:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.490 03:57:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.490 03:57:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.490 03:57:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.490 03:57:01 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.490 03:57:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.749 03:57:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.749 03:57:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.749 03:57:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.749 03:57:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.749 03:57:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.749 03:57:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.749 03:57:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:14.749 03:57:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.749 03:57:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.749 03:57:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.749 03:57:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.749 03:57:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.749 03:57:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.008 03:57:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:15.947 [2024-12-06 03:57:03.169661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.947 [2024-12-06 03:57:03.272651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.947 [2024-12-06 03:57:03.272787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.947 [2024-12-06 03:57:03.398345] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.947 [2024-12-06 03:57:03.398432] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:18.493 03:57:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:18.493 spdk_app_start Round 1 00:06:18.493 03:57:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:18.493 03:57:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58444 /var/tmp/spdk-nbd.sock 00:06:18.493 03:57:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58444 ']' 00:06:18.493 03:57:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:18.493 03:57:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.493 03:57:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
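Round 0 above finishes with nbd_dd_data_verify's write/verify round trip: 1 MiB of random data (256 blocks of 4 KiB) is pushed through each NBD device with direct I/O, then compared back byte-for-byte against the source file. A sketch of that data path; the traced helper takes a separate write/verify operation argument and a quoted device string, both simplified here, and the temp-file path is shortened:

nbd_dd_data_verify() {
    local tmp_file=/tmp/nbdrandtest nbd_list=("$@")   # devices as separate args here

    # write phase (trace: dd if=/dev/urandom ... bs=4096 count=256 -> 1 MiB)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for i in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
    done

    # verify phase (trace: cmp -b -n 1M against each device)
    for i in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$i" || return 1
    done
    rm "$tmp_file"
}

Used as: nbd_dd_data_verify /dev/nbd0 /dev/nbd1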
00:06:18.493 03:57:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.493 03:57:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.493 03:57:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.493 03:57:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:18.493 03:57:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.493 Malloc0 00:06:18.493 03:57:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.752 Malloc1 00:06:18.752 03:57:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.752 03:57:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.752 03:57:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.752 03:57:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.752 03:57:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.752 03:57:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.752 03:57:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.752 03:57:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.752 03:57:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.752 03:57:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.752 03:57:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.752 03:57:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.752 03:57:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:18.752 03:57:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.752 03:57:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.752 03:57:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.010 /dev/nbd0 00:06:19.010 03:57:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.010 03:57:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.010 03:57:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:19.010 03:57:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:19.010 03:57:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.010 03:57:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.010 03:57:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:19.010 03:57:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:19.010 03:57:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.010 03:57:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.010 03:57:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.010 1+0 records in 00:06:19.010 1+0 records out 
00:06:19.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296379 s, 13.8 MB/s 00:06:19.010 03:57:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.010 03:57:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:19.010 03:57:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.010 03:57:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.010 03:57:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:19.010 03:57:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.010 03:57:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.010 03:57:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.268 /dev/nbd1 00:06:19.268 03:57:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.268 03:57:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.268 03:57:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:19.268 03:57:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:19.268 03:57:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.268 03:57:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.268 03:57:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:19.268 03:57:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:19.268 03:57:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.268 03:57:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.268 03:57:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.268 1+0 records in 00:06:19.268 1+0 records out 00:06:19.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217507 s, 18.8 MB/s 00:06:19.268 03:57:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.268 03:57:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:19.268 03:57:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.268 03:57:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.268 03:57:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:19.268 03:57:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.268 03:57:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.268 03:57:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.268 03:57:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.268 03:57:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.526 { 00:06:19.526 "nbd_device": "/dev/nbd0", 00:06:19.526 "bdev_name": "Malloc0" 00:06:19.526 }, 00:06:19.526 { 00:06:19.526 "nbd_device": "/dev/nbd1", 00:06:19.526 "bdev_name": "Malloc1" 00:06:19.526 } 
00:06:19.526 ]' 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.526 { 00:06:19.526 "nbd_device": "/dev/nbd0", 00:06:19.526 "bdev_name": "Malloc0" 00:06:19.526 }, 00:06:19.526 { 00:06:19.526 "nbd_device": "/dev/nbd1", 00:06:19.526 "bdev_name": "Malloc1" 00:06:19.526 } 00:06:19.526 ]' 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.526 /dev/nbd1' 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.526 /dev/nbd1' 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.526 256+0 records in 00:06:19.526 256+0 records out 00:06:19.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00732601 s, 143 MB/s 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.526 256+0 records in 00:06:19.526 256+0 records out 00:06:19.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214034 s, 49.0 MB/s 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.526 256+0 records in 00:06:19.526 256+0 records out 00:06:19.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229577 s, 45.7 MB/s 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.526 03:57:06 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.526 03:57:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.786 03:57:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.786 03:57:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.786 03:57:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.786 03:57:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.786 03:57:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.786 03:57:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.786 03:57:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.786 03:57:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.786 03:57:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.786 03:57:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.046 03:57:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.046 03:57:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.046 03:57:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.046 03:57:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.046 03:57:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.046 03:57:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.046 03:57:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.046 03:57:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.046 03:57:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.046 03:57:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.046 03:57:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.303 03:57:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.303 03:57:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.303 03:57:07 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:20.303 03:57:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.303 03:57:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.303 03:57:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.303 03:57:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:20.303 03:57:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.303 03:57:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.303 03:57:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.303 03:57:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.303 03:57:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.303 03:57:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:20.637 03:57:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:21.217 [2024-12-06 03:57:08.605631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.217 [2024-12-06 03:57:08.685294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.217 [2024-12-06 03:57:08.685316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.476 [2024-12-06 03:57:08.791456] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.476 [2024-12-06 03:57:08.791512] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:24.013 spdk_app_start Round 2 00:06:24.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.013 03:57:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:24.013 03:57:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:24.013 03:57:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58444 /var/tmp/spdk-nbd.sock 00:06:24.013 03:57:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58444 ']' 00:06:24.013 03:57:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.013 03:57:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.013 03:57:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
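Between rounds the suite counts attached devices by piping the nbd_get_disks JSON through jq and grep -c, exactly as traced above. The trailing true matters: grep -c prints 0 but exits non-zero when nothing matches, which would otherwise trip the suite's errexit; that is the bare 'true' step visible in the empty-list traces. A sketch of the idiom, assuming rpc.py is reachable on PATH (the trace invokes it by its full repo path):

nbd_get_count() {
    local rpc_server=$1 nbd_disks_json nbd_disks_name count

    nbd_disks_json=$(rpc.py -s "$rpc_server" nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

    # grep -c exits 1 on zero matches; || true keeps set -e from aborting the test.
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    echo "$count"
}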
00:06:24.013 03:57:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.013 03:57:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.013 03:57:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.013 03:57:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:24.014 03:57:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.014 Malloc0 00:06:24.014 03:57:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.275 Malloc1 00:06:24.275 03:57:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.275 03:57:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.275 03:57:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.275 03:57:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.275 03:57:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.275 03:57:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.275 03:57:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.275 03:57:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.275 03:57:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.275 03:57:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.275 03:57:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.275 03:57:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.275 03:57:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:24.275 03:57:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.275 03:57:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.275 03:57:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:24.535 /dev/nbd0 00:06:24.535 03:57:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:24.535 03:57:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:24.535 03:57:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:24.535 03:57:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:24.535 03:57:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:24.535 03:57:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:24.535 03:57:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:24.535 03:57:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:24.535 03:57:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:24.535 03:57:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:24.535 03:57:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.535 1+0 records in 00:06:24.535 1+0 records out 
00:06:24.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182132 s, 22.5 MB/s 00:06:24.535 03:57:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.535 03:57:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:24.535 03:57:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.535 03:57:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:24.535 03:57:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:24.535 03:57:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.535 03:57:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.535 03:57:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:24.795 /dev/nbd1 00:06:24.795 03:57:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:24.795 03:57:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:24.795 03:57:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:24.795 03:57:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:24.795 03:57:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:24.795 03:57:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:24.795 03:57:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:24.795 03:57:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:24.795 03:57:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:24.795 03:57:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:24.795 03:57:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.795 1+0 records in 00:06:24.795 1+0 records out 00:06:24.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231155 s, 17.7 MB/s 00:06:24.795 03:57:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.795 03:57:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:24.795 03:57:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.795 03:57:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:24.795 03:57:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:24.795 03:57:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.795 03:57:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.795 03:57:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.795 03:57:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.795 03:57:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:25.057 { 00:06:25.057 "nbd_device": "/dev/nbd0", 00:06:25.057 "bdev_name": "Malloc0" 00:06:25.057 }, 00:06:25.057 { 00:06:25.057 "nbd_device": "/dev/nbd1", 00:06:25.057 "bdev_name": "Malloc1" 00:06:25.057 } 
00:06:25.057 ]' 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:25.057 { 00:06:25.057 "nbd_device": "/dev/nbd0", 00:06:25.057 "bdev_name": "Malloc0" 00:06:25.057 }, 00:06:25.057 { 00:06:25.057 "nbd_device": "/dev/nbd1", 00:06:25.057 "bdev_name": "Malloc1" 00:06:25.057 } 00:06:25.057 ]' 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:25.057 /dev/nbd1' 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:25.057 /dev/nbd1' 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:25.057 256+0 records in 00:06:25.057 256+0 records out 00:06:25.057 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00705865 s, 149 MB/s 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:25.057 256+0 records in 00:06:25.057 256+0 records out 00:06:25.057 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149912 s, 69.9 MB/s 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:25.057 256+0 records in 00:06:25.057 256+0 records out 00:06:25.057 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166049 s, 63.1 MB/s 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:25.057 03:57:12 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.057 03:57:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.317 03:57:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.317 03:57:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.317 03:57:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.317 03:57:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.317 03:57:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.317 03:57:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.317 03:57:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.317 03:57:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.317 03:57:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.317 03:57:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:25.317 03:57:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:25.317 03:57:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:25.317 03:57:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:25.317 03:57:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.317 03:57:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.317 03:57:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:25.317 03:57:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.317 03:57:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.317 03:57:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.317 03:57:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.317 03:57:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.577 03:57:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.577 03:57:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.577 03:57:13 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.577 03:57:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.577 03:57:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.577 03:57:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.577 03:57:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:25.577 03:57:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.577 03:57:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.577 03:57:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:25.577 03:57:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:25.577 03:57:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:25.577 03:57:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:25.837 03:57:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:26.404 [2024-12-06 03:57:13.922437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.666 [2024-12-06 03:57:14.001439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.666 [2024-12-06 03:57:14.001562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.666 [2024-12-06 03:57:14.100119] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:26.666 [2024-12-06 03:57:14.100188] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:29.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:29.218 03:57:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58444 /var/tmp/spdk-nbd.sock 00:06:29.218 03:57:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58444 ']' 00:06:29.218 03:57:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.218 03:57:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.218 03:57:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
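With all three rounds plus the final restart now visible, the driving loop from event.sh can be summarized. The app_repeat binary is started once and deliberately restarts itself after each SIGTERM, which is what the repeated 'reinitialization' banners above show. A sketch of the loop with helper bodies elided and paths shortened; the invocation shape follows the flags traced in Round 0:

app_repeat_test() {
    local rpc_server=/var/tmp/spdk-nbd.sock repeat_pid

    modprobe nbd
    app_repeat -r "$rpc_server" -m 0x3 -t 4 &   # flags as traced in Round 0
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc_server"
        # per-round body: create Malloc0/Malloc1, attach nbd0/nbd1, write+verify, detach
        rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM   # app restarts itself
        sleep 3
    done

    waitforlisten "$repeat_pid" "$rpc_server"   # Round 3: final restart, then shut down
    killprocess "$repeat_pid"
}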
00:06:29.218 03:57:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.218 03:57:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.218 03:57:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.218 03:57:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:29.218 03:57:16 event.app_repeat -- event/event.sh@39 -- # killprocess 58444 00:06:29.218 03:57:16 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58444 ']' 00:06:29.218 03:57:16 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58444 00:06:29.218 03:57:16 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:29.218 03:57:16 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.218 03:57:16 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58444 00:06:29.218 killing process with pid 58444 00:06:29.218 03:57:16 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.218 03:57:16 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.218 03:57:16 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58444' 00:06:29.218 03:57:16 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58444 00:06:29.218 03:57:16 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58444 00:06:29.817 spdk_app_start is called in Round 0. 00:06:29.817 Shutdown signal received, stop current app iteration 00:06:29.817 Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 reinitialization... 00:06:29.817 spdk_app_start is called in Round 1. 00:06:29.817 Shutdown signal received, stop current app iteration 00:06:29.817 Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 reinitialization... 00:06:29.817 spdk_app_start is called in Round 2. 00:06:29.817 Shutdown signal received, stop current app iteration 00:06:29.817 Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 reinitialization... 00:06:29.817 spdk_app_start is called in Round 3. 00:06:29.817 Shutdown signal received, stop current app iteration 00:06:29.817 ************************************ 00:06:29.817 END TEST app_repeat 00:06:29.817 ************************************ 00:06:29.817 03:57:17 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:29.817 03:57:17 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:29.817 00:06:29.817 real 0m18.151s 00:06:29.817 user 0m39.656s 00:06:29.817 sys 0m2.252s 00:06:29.817 03:57:17 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.817 03:57:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.817 03:57:17 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:29.817 03:57:17 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:29.817 03:57:17 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.817 03:57:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.817 03:57:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.817 ************************************ 00:06:29.817 START TEST cpu_locks 00:06:29.817 ************************************ 00:06:29.817 03:57:17 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:29.817 * Looking for test storage... 
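The cpu_locks suite opens by checking whether the installed lcov is new enough; the trace that follows walks scripts/common.sh's cmp_versions for 'lt 1.15 2'. A condensed sketch of that comparison (the traced helper also validates each component through a decimal() check, omitted here): version strings are split on '.', '-' and ':', then compared numerically field by field, with missing fields treated as 0:

cmp_versions() {
    local ver1 ver2 ver1_l ver2_l op=$2 v

    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}

    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        # e.g. 1.15 vs 2 -> compare 1 vs 2 first, then 15 vs 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]   # all fields equal
}

lt() { cmp_versions "$1" '<' "$2"; }   # so: lt 1.15 2 succeeds, 1.15 sorts before 2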
00:06:29.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:29.817 03:57:17 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:29.817 03:57:17 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:29.817 03:57:17 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:29.817 03:57:17 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.817 03:57:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:29.818 03:57:17 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.818 03:57:17 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.818 03:57:17 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.818 03:57:17 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:29.818 03:57:17 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.818 03:57:17 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:29.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.818 --rc genhtml_branch_coverage=1 00:06:29.818 --rc genhtml_function_coverage=1 00:06:29.818 --rc genhtml_legend=1 00:06:29.818 --rc geninfo_all_blocks=1 00:06:29.818 --rc geninfo_unexecuted_blocks=1 00:06:29.818 00:06:29.818 ' 00:06:29.818 03:57:17 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:29.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.818 --rc genhtml_branch_coverage=1 00:06:29.818 --rc genhtml_function_coverage=1 
00:06:29.818 --rc genhtml_legend=1 00:06:29.818 --rc geninfo_all_blocks=1 00:06:29.818 --rc geninfo_unexecuted_blocks=1 00:06:29.818 00:06:29.818 ' 00:06:29.818 03:57:17 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:29.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.818 --rc genhtml_branch_coverage=1 00:06:29.818 --rc genhtml_function_coverage=1 00:06:29.818 --rc genhtml_legend=1 00:06:29.818 --rc geninfo_all_blocks=1 00:06:29.818 --rc geninfo_unexecuted_blocks=1 00:06:29.818 00:06:29.818 ' 00:06:29.818 03:57:17 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:29.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.818 --rc genhtml_branch_coverage=1 00:06:29.818 --rc genhtml_function_coverage=1 00:06:29.818 --rc genhtml_legend=1 00:06:29.818 --rc geninfo_all_blocks=1 00:06:29.818 --rc geninfo_unexecuted_blocks=1 00:06:29.818 00:06:29.818 ' 00:06:29.818 03:57:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:29.818 03:57:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:29.818 03:57:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:29.818 03:57:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:29.818 03:57:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.818 03:57:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.818 03:57:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.818 ************************************ 00:06:29.818 START TEST default_locks 00:06:29.818 ************************************ 00:06:29.818 03:57:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:29.818 03:57:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58880 00:06:29.818 03:57:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58880 00:06:29.818 03:57:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58880 ']' 00:06:29.818 03:57:17 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.818 03:57:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.818 03:57:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.818 03:57:17 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.818 03:57:17 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.818 03:57:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.079 [2024-12-06 03:57:17.361282] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
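[editor's note] The scripts/common.sh records a few lines back (lt 1.15 2 expanding to cmp_versions) split dotted version strings into arrays and compare them field by field to pick lcov options. A compact sketch of the same idea, simplified: the real helper also normalizes each field through its decimal function, which this version omits:

  # return 0 when $1 sorts before $2, comparing dot/dash-separated numeric fields
  version_lt() {
    local -a v1 v2
    local i
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1
  }
  version_lt 1.15 2 && echo "1.15 sorts before 2"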
00:06:30.079 [2024-12-06 03:57:17.361471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58880 ] 00:06:30.079 [2024-12-06 03:57:17.540552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.340 [2024-12-06 03:57:17.643663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.911 03:57:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.911 03:57:18 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:30.911 03:57:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58880 00:06:30.911 03:57:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58880 00:06:30.911 03:57:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.172 03:57:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58880 00:06:31.172 03:57:18 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58880 ']' 00:06:31.172 03:57:18 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58880 00:06:31.172 03:57:18 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:31.172 03:57:18 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.172 03:57:18 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58880 00:06:31.172 03:57:18 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.172 03:57:18 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.172 killing process with pid 58880 00:06:31.172 03:57:18 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58880' 00:06:31.172 03:57:18 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58880 00:06:31.172 03:57:18 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58880 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58880 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58880 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58880 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58880 ']' 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.559 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.559 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58880) - No such process 00:06:32.559 ERROR: process (pid: 58880) is no longer running 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:32.559 00:06:32.559 real 0m2.768s 00:06:32.559 user 0m2.751s 00:06:32.559 sys 0m0.482s 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.559 03:57:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.559 ************************************ 00:06:32.559 END TEST default_locks 00:06:32.559 ************************************ 00:06:32.559 03:57:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:32.559 03:57:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.559 03:57:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.559 03:57:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.559 ************************************ 00:06:32.559 START TEST default_locks_via_rpc 00:06:32.559 ************************************ 00:06:32.559 03:57:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:32.559 03:57:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58944 00:06:32.559 03:57:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58944 00:06:32.559 03:57:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58944 ']' 00:06:32.559 03:57:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.559 03:57:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
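[editor's note] default_locks, which wrapped up above, is the baseline scenario: spdk_tgt claims core 0, the lock's presence is asserted, and after the kill a waitforlisten on the dead pid must fail. The assertion is the lslocks pipe visible in the trace; a sketch of that helper, with the body reconstructed from the two piped commands shown (the real cpu_locks.sh may differ in detail):

  # a claimed core shows up in lslocks as a lock named spdk_cpu_lock held by the pid
  locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }
  locks_exist 58880 && echo "pid 58880 holds its core lock"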
00:06:32.559 03:57:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.559 03:57:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.559 03:57:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.559 03:57:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.885 [2024-12-06 03:57:20.141022] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:06:32.885 [2024-12-06 03:57:20.141148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58944 ] 00:06:32.885 [2024-12-06 03:57:20.332816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.146 [2024-12-06 03:57:20.465875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.718 03:57:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.718 03:57:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:33.718 03:57:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:33.718 03:57:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.718 03:57:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.718 03:57:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.718 03:57:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:33.718 03:57:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:33.718 03:57:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:33.718 03:57:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:33.718 03:57:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:33.718 03:57:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.718 03:57:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.718 03:57:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.718 03:57:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58944 00:06:33.718 03:57:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58944 00:06:33.718 03:57:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.978 03:57:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58944 00:06:33.978 03:57:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58944 ']' 00:06:33.978 03:57:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58944 00:06:33.978 03:57:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:33.978 03:57:21 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.978 03:57:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58944 00:06:33.978 03:57:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.978 03:57:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.978 killing process with pid 58944 00:06:33.978 03:57:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58944' 00:06:33.978 03:57:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58944 00:06:33.978 03:57:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58944 00:06:35.362 ************************************ 00:06:35.362 END TEST default_locks_via_rpc 00:06:35.362 ************************************ 00:06:35.362 00:06:35.362 real 0m2.708s 00:06:35.362 user 0m2.711s 00:06:35.362 sys 0m0.484s 00:06:35.362 03:57:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.362 03:57:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.362 03:57:22 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:35.362 03:57:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.362 03:57:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.362 03:57:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.362 ************************************ 00:06:35.362 START TEST non_locking_app_on_locked_coremask 00:06:35.362 ************************************ 00:06:35.362 03:57:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:35.362 03:57:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58996 00:06:35.362 03:57:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58996 /var/tmp/spdk.sock 00:06:35.362 03:57:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58996 ']' 00:06:35.362 03:57:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.362 03:57:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.362 03:57:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.362 03:57:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.362 03:57:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.362 03:57:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.623 [2024-12-06 03:57:22.892498] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
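[editor's note] The default_locks_via_rpc scenario that ended just above flips lock claiming at runtime instead of at startup: rpc_cmd framework_disable_cpumask_locks releases the per-core lock files and framework_enable_cpumask_locks re-claims them on the live target. The equivalent direct calls, with the socket path as traced:

  # drop the per-core lock files on a running spdk_tgt, then take them back
  scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks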
00:06:35.623 [2024-12-06 03:57:22.892651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58996 ] 00:06:35.623 [2024-12-06 03:57:23.047250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.886 [2024-12-06 03:57:23.154732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.457 03:57:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.457 03:57:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:36.457 03:57:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59012 00:06:36.457 03:57:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59012 /var/tmp/spdk2.sock 00:06:36.457 03:57:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59012 ']' 00:06:36.457 03:57:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.457 03:57:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.457 03:57:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.457 03:57:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.457 03:57:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.457 03:57:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:36.457 [2024-12-06 03:57:23.837968] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:06:36.457 [2024-12-06 03:57:23.838071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59012 ] 00:06:36.715 [2024-12-06 03:57:24.005361] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:36.715 [2024-12-06 03:57:24.005437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.715 [2024-12-06 03:57:24.210801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.096 03:57:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.096 03:57:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:38.096 03:57:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58996 00:06:38.096 03:57:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.096 03:57:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58996 00:06:38.352 03:57:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58996 00:06:38.352 03:57:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58996 ']' 00:06:38.352 03:57:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58996 00:06:38.352 03:57:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:38.352 03:57:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.352 03:57:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58996 00:06:38.352 03:57:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.352 03:57:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.352 killing process with pid 58996 00:06:38.352 03:57:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58996' 00:06:38.352 03:57:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58996 00:06:38.352 03:57:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58996 00:06:41.630 03:57:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59012 00:06:41.630 03:57:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59012 ']' 00:06:41.630 03:57:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59012 00:06:41.630 03:57:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:41.630 03:57:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.630 03:57:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59012 00:06:41.630 03:57:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.630 03:57:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.630 killing process with pid 59012 00:06:41.630 03:57:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59012' 00:06:41.630 03:57:28 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59012 00:06:41.630 03:57:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59012 00:06:42.563 00:06:42.563 real 0m7.142s 00:06:42.563 user 0m7.302s 00:06:42.563 sys 0m0.879s 00:06:42.563 03:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.563 03:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.563 ************************************ 00:06:42.563 END TEST non_locking_app_on_locked_coremask 00:06:42.563 ************************************ 00:06:42.563 03:57:29 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:42.563 03:57:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.563 03:57:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.563 03:57:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.563 ************************************ 00:06:42.563 START TEST locking_app_on_unlocked_coremask 00:06:42.563 ************************************ 00:06:42.563 03:57:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:42.563 03:57:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59120 00:06:42.563 03:57:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59120 /var/tmp/spdk.sock 00:06:42.563 03:57:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59120 ']' 00:06:42.563 03:57:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.563 03:57:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.563 03:57:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:42.563 03:57:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.563 03:57:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.563 03:57:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.563 [2024-12-06 03:57:30.068500] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:06:42.563 [2024-12-06 03:57:30.068625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59120 ] 00:06:42.821 [2024-12-06 03:57:30.229103] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
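[editor's note] locking_app_on_unlocked_coremask, starting here, inverts the previous case: the first target runs with --disable-cpumask-locks, so core 0 stays unclaimed and a second instance on the same mask can take the lock. The two launches as they appear in the trace, with the binary path shortened:

  # instance 1: same core mask, but claims no lock files
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
  # instance 2: default locking, separate RPC socket; its core 0 claim succeeds
  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock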
00:06:42.821 [2024-12-06 03:57:30.229152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.821 [2024-12-06 03:57:30.332609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.756 03:57:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.756 03:57:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:43.756 03:57:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59130 00:06:43.756 03:57:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59130 /var/tmp/spdk2.sock 00:06:43.757 03:57:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59130 ']' 00:06:43.757 03:57:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.757 03:57:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.757 03:57:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.757 03:57:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.757 03:57:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.757 03:57:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:43.757 [2024-12-06 03:57:31.010227] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:06:43.757 [2024-12-06 03:57:31.010343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59130 ] 00:06:43.757 [2024-12-06 03:57:31.182137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.015 [2024-12-06 03:57:31.389751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.389 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.389 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:45.389 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59130 00:06:45.389 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59130 00:06:45.389 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.389 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59120 00:06:45.389 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59120 ']' 00:06:45.389 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59120 00:06:45.389 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:45.389 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.389 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59120 00:06:45.389 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.389 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.389 killing process with pid 59120 00:06:45.389 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59120' 00:06:45.389 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59120 00:06:45.389 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59120 00:06:48.748 03:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59130 00:06:48.748 03:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59130 ']' 00:06:48.748 03:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59130 00:06:48.748 03:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:48.748 03:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.748 03:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59130 00:06:48.748 03:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.748 03:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.748 killing process with pid 59130 00:06:48.748 03:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59130' 00:06:48.748 03:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59130 00:06:48.748 03:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59130 00:06:49.682 00:06:49.682 real 0m7.157s 00:06:49.682 user 0m7.388s 00:06:49.682 sys 0m0.849s 00:06:49.682 03:57:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.682 ************************************ 00:06:49.682 END TEST locking_app_on_unlocked_coremask 00:06:49.682 ************************************ 00:06:49.682 03:57:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.682 03:57:37 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:49.682 03:57:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.682 03:57:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.682 03:57:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.682 ************************************ 00:06:49.682 START TEST locking_app_on_locked_coremask 00:06:49.682 ************************************ 00:06:49.682 03:57:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:49.682 03:57:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59238 00:06:49.682 03:57:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.682 03:57:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59238 /var/tmp/spdk.sock 00:06:49.682 03:57:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59238 ']' 00:06:49.682 03:57:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.682 03:57:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.682 03:57:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.682 03:57:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.682 03:57:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.941 [2024-12-06 03:57:37.263940] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:06:49.941 [2024-12-06 03:57:37.264062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59238 ] 00:06:49.941 [2024-12-06 03:57:37.419608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.199 [2024-12-06 03:57:37.500388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.766 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.766 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:50.766 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:50.766 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59253 00:06:50.766 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59253 /var/tmp/spdk2.sock 00:06:50.766 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:50.766 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59253 /var/tmp/spdk2.sock 00:06:50.766 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:50.766 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.766 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:50.766 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.766 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59253 /var/tmp/spdk2.sock 00:06:50.766 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59253 ']' 00:06:50.766 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.766 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.766 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.766 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.766 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.766 [2024-12-06 03:57:38.167678] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
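[editor's note] The NOT waitforlisten 59253 sequence traced above is how autotest asserts an expected failure: the wrapper runs the command, captures its status, and succeeds only when the command failed. A reduced sketch of the pattern; the real autotest_common.sh helper also routes the argument through valid_exec_arg, as the case/type records show:

  # invert exit status: a failing command makes the assertion pass
  NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))
  }
  NOT waitforlisten 59253 /var/tmp/spdk2.sock   # must fail: core 0 is already claimed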
00:06:50.766 [2024-12-06 03:57:38.167807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59253 ] 00:06:51.025 [2024-12-06 03:57:38.332780] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59238 has claimed it. 00:06:51.025 [2024-12-06 03:57:38.332831] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:51.283 ERROR: process (pid: 59253) is no longer running 00:06:51.283 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59253) - No such process 00:06:51.283 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.283 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:51.283 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:51.283 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.283 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:51.283 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.283 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59238 00:06:51.283 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59238 00:06:51.283 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.542 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59238 00:06:51.542 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59238 ']' 00:06:51.542 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59238 00:06:51.542 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:51.542 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.542 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59238 00:06:51.542 03:57:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.542 03:57:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.542 killing process with pid 59238 00:06:51.542 03:57:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59238' 00:06:51.542 03:57:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59238 00:06:51.542 03:57:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59238 00:06:52.917 00:06:52.917 real 0m3.053s 00:06:52.917 user 0m3.287s 00:06:52.917 sys 0m0.517s 00:06:52.917 03:57:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.917 ************************************ 00:06:52.917 END 
TEST locking_app_on_locked_coremask 00:06:52.917 ************************************ 00:06:52.918 03:57:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.918 03:57:40 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:52.918 03:57:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.918 03:57:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.918 03:57:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.918 ************************************ 00:06:52.918 START TEST locking_overlapped_coremask 00:06:52.918 ************************************ 00:06:52.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.918 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:52.918 03:57:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59307 00:06:52.918 03:57:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59307 /var/tmp/spdk.sock 00:06:52.918 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59307 ']' 00:06:52.918 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.918 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.918 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.918 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.918 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.918 03:57:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:52.918 [2024-12-06 03:57:40.353505] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
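[editor's note] locking_overlapped_coremask, beginning here, moves from identical masks to intersecting ones: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the second target's claim collides on core 2 even though the masks differ. The two launches as traced:

  # the masks overlap only on core 2; one shared core is enough to conflict
  build/bin/spdk_tgt -m 0x7                            # locks cores 0,1,2
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock    # wants cores 2,3,4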
00:06:52.918 [2024-12-06 03:57:40.353803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59307 ] 00:06:53.176 [2024-12-06 03:57:40.504662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.176 [2024-12-06 03:57:40.590570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.177 [2024-12-06 03:57:40.590779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.177 [2024-12-06 03:57:40.590869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.744 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.744 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:53.744 03:57:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59325 00:06:53.744 03:57:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:53.744 03:57:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59325 /var/tmp/spdk2.sock 00:06:53.744 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:53.744 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59325 /var/tmp/spdk2.sock 00:06:53.744 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:53.744 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.744 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:53.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.744 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.744 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59325 /var/tmp/spdk2.sock 00:06:53.744 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59325 ']' 00:06:53.744 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.744 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.744 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.744 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.744 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.002 [2024-12-06 03:57:41.286961] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:06:54.002 [2024-12-06 03:57:41.287346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59325 ] 00:06:54.002 [2024-12-06 03:57:41.474894] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59307 has claimed it. 00:06:54.002 [2024-12-06 03:57:41.474956] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:54.568 ERROR: process (pid: 59325) is no longer running 00:06:54.568 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59325) - No such process 00:06:54.568 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.568 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:54.568 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:54.568 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:54.568 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:54.568 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:54.568 03:57:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:54.568 03:57:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:54.568 03:57:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:54.568 03:57:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:54.568 03:57:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59307 00:06:54.568 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59307 ']' 00:06:54.568 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59307 00:06:54.568 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:54.568 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.568 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59307 00:06:54.568 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.568 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.568 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59307' 00:06:54.568 killing process with pid 59307 00:06:54.568 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59307 00:06:54.569 03:57:41 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59307 00:06:55.944 00:06:55.944 real 0m2.866s 00:06:55.944 user 0m7.833s 00:06:55.944 sys 0m0.438s 00:06:55.944 03:57:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.944 03:57:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.944 ************************************ 00:06:55.944 END TEST locking_overlapped_coremask 00:06:55.944 ************************************ 00:06:55.944 03:57:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:55.944 03:57:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.944 03:57:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.944 03:57:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.944 ************************************ 00:06:55.944 START TEST locking_overlapped_coremask_via_rpc 00:06:55.944 ************************************ 00:06:55.944 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:55.944 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59377 00:06:55.944 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59377 /var/tmp/spdk.sock 00:06:55.944 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:55.944 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59377 ']' 00:06:55.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.944 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.944 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.944 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.944 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.944 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.944 [2024-12-06 03:57:43.258202] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:06:55.944 [2024-12-06 03:57:43.258325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59377 ] 00:06:55.944 [2024-12-06 03:57:43.415922] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
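[editor's note] check_remaining_locks, traced just before END TEST locking_overlapped_coremask above, globs the surviving lock files and compares them against the exact set expected for cores 0-2. The comparison as the trace shows it, wrapped with an illustrative failure message:

  # exactly spdk_cpu_lock_000..002 should remain, nothing more, nothing less
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ ${locks[*]} == "${locks_expected[*]}" ]] || echo "unexpected lock files: ${locks[*]}"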
00:06:55.944 [2024-12-06 03:57:43.415961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.203 [2024-12-06 03:57:43.498677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.203 [2024-12-06 03:57:43.498980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.203 [2024-12-06 03:57:43.499000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.767 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.767 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:56.767 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59390 00:06:56.767 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:56.767 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59390 /var/tmp/spdk2.sock 00:06:56.767 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59390 ']' 00:06:56.767 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.767 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.767 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.767 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.767 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.767 [2024-12-06 03:57:44.157390] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:06:56.767 [2024-12-06 03:57:44.157665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59390 ] 00:06:57.025 [2024-12-06 03:57:44.329633] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
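The via_rpc variant only works because both targets boot with --disable-cpumask-locks, which is why each prints the "CPU core locks deactivated" notice above; with locking off, the overlapping masks can coexist until the test re-enables locking over RPC. Stripped of the harness wrappers, the two launches logged here are:

  # paths shortened for readability; flags as logged
  spdk_tgt -m 0x7 --disable-cpumask-locks                          # pid 59377, cores 0-2
  spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks  # pid 59390, cores 2-4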
00:06:57.025 [2024-12-06 03:57:44.329684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.025 [2024-12-06 03:57:44.536596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.025 [2024-12-06 03:57:44.539807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.025 [2024-12-06 03:57:44.539838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.397 [2024-12-06 03:57:45.710832] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59377 has claimed it. 00:06:58.397 request: 00:06:58.397 { 00:06:58.397 "method": "framework_enable_cpumask_locks", 00:06:58.397 "req_id": 1 00:06:58.397 } 00:06:58.397 Got JSON-RPC error response 00:06:58.397 response: 00:06:58.397 { 00:06:58.397 "code": -32603, 00:06:58.397 "message": "Failed to claim CPU core: 2" 00:06:58.397 } 00:06:58.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
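The -32603 response above is the expected outcome: the first target re-acquired its locks on cores 0-2 when framework_enable_cpumask_locks was sent to its default socket, so the same RPC against the second target cannot claim the shared core 2. rpc_cmd is the harness wrapper around scripts/rpc.py, so the failing call is equivalent to:

  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # -> error {"code": -32603, "message": "Failed to claim CPU core: 2"}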
00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59377 /var/tmp/spdk.sock 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59377 ']' 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.397 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.655 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.655 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:58.655 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59390 /var/tmp/spdk2.sock 00:06:58.655 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59390 ']' 00:06:58.655 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.655 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.655 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
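Before tearing down, the test repeats the lock-file audit; the check_remaining_locks seen at cpu_locks.sh lines 36-38 (it runs again just below) reduces to globbing the actual lock files and comparing them against a brace expansion of the expected set:

  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${locks_expected[*]}" ]]   # only cores 0-2 may hold locks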
00:06:58.655 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.655 03:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.912 ************************************ 00:06:58.912 END TEST locking_overlapped_coremask_via_rpc 00:06:58.912 ************************************ 00:06:58.912 03:57:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.912 03:57:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:58.912 03:57:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:58.912 03:57:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:58.912 03:57:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:58.912 03:57:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:58.912 00:06:58.912 real 0m3.003s 00:06:58.912 user 0m1.099s 00:06:58.912 sys 0m0.132s 00:06:58.912 03:57:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.912 03:57:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.912 03:57:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:58.912 03:57:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59377 ]] 00:06:58.912 03:57:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59377 00:06:58.912 03:57:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59377 ']' 00:06:58.912 03:57:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59377 00:06:58.912 03:57:46 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:58.912 03:57:46 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.912 03:57:46 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59377 00:06:58.912 killing process with pid 59377 00:06:58.912 03:57:46 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.912 03:57:46 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.912 03:57:46 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59377' 00:06:58.912 03:57:46 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59377 00:06:58.912 03:57:46 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59377 00:07:00.321 03:57:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59390 ]] 00:07:00.321 03:57:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59390 00:07:00.321 03:57:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59390 ']' 00:07:00.321 03:57:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59390 00:07:00.321 03:57:47 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:00.321 03:57:47 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.321 
03:57:47 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59390 00:07:00.321 killing process with pid 59390 00:07:00.321 03:57:47 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:00.321 03:57:47 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:00.321 03:57:47 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59390' 00:07:00.321 03:57:47 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59390 00:07:00.321 03:57:47 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59390 00:07:01.254 03:57:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:01.254 03:57:48 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:01.254 03:57:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59377 ]] 00:07:01.254 03:57:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59377 00:07:01.254 03:57:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59377 ']' 00:07:01.254 03:57:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59377 00:07:01.254 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59377) - No such process 00:07:01.254 Process with pid 59377 is not found 00:07:01.254 03:57:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59377 is not found' 00:07:01.254 03:57:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59390 ]] 00:07:01.254 03:57:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59390 00:07:01.254 03:57:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59390 ']' 00:07:01.254 03:57:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59390 00:07:01.254 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59390) - No such process 00:07:01.254 Process with pid 59390 is not found 00:07:01.254 03:57:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59390 is not found' 00:07:01.254 03:57:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:01.254 ************************************ 00:07:01.254 END TEST cpu_locks 00:07:01.254 ************************************ 00:07:01.254 00:07:01.254 real 0m31.609s 00:07:01.254 user 0m53.285s 00:07:01.254 sys 0m4.559s 00:07:01.254 03:57:48 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.254 03:57:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.254 ************************************ 00:07:01.254 END TEST event 00:07:01.254 ************************************ 00:07:01.254 00:07:01.254 real 0m58.286s 00:07:01.254 user 1m46.083s 00:07:01.254 sys 0m7.598s 00:07:01.254 03:57:48 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.254 03:57:48 event -- common/autotest_common.sh@10 -- # set +x 00:07:01.512 03:57:48 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:01.512 03:57:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.512 03:57:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.512 03:57:48 -- common/autotest_common.sh@10 -- # set +x 00:07:01.512 ************************************ 00:07:01.512 START TEST thread 00:07:01.512 ************************************ 00:07:01.512 03:57:48 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:01.512 * Looking for test storage... 
00:07:01.512 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:01.512 03:57:48 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:01.512 03:57:48 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:07:01.512 03:57:48 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:01.512 03:57:48 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:01.512 03:57:48 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.512 03:57:48 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.512 03:57:48 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.512 03:57:48 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.512 03:57:48 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.512 03:57:48 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.512 03:57:48 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.512 03:57:48 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.512 03:57:48 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.512 03:57:48 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.512 03:57:48 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.512 03:57:48 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:01.512 03:57:48 thread -- scripts/common.sh@345 -- # : 1 00:07:01.512 03:57:48 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.512 03:57:48 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.512 03:57:48 thread -- scripts/common.sh@365 -- # decimal 1 00:07:01.512 03:57:48 thread -- scripts/common.sh@353 -- # local d=1 00:07:01.512 03:57:48 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.512 03:57:48 thread -- scripts/common.sh@355 -- # echo 1 00:07:01.512 03:57:48 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.512 03:57:48 thread -- scripts/common.sh@366 -- # decimal 2 00:07:01.512 03:57:48 thread -- scripts/common.sh@353 -- # local d=2 00:07:01.512 03:57:48 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.512 03:57:48 thread -- scripts/common.sh@355 -- # echo 2 00:07:01.512 03:57:48 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.512 03:57:48 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.512 03:57:48 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.512 03:57:48 thread -- scripts/common.sh@368 -- # return 0 00:07:01.512 03:57:48 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.512 03:57:48 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:01.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.512 --rc genhtml_branch_coverage=1 00:07:01.512 --rc genhtml_function_coverage=1 00:07:01.512 --rc genhtml_legend=1 00:07:01.512 --rc geninfo_all_blocks=1 00:07:01.512 --rc geninfo_unexecuted_blocks=1 00:07:01.512 00:07:01.512 ' 00:07:01.513 03:57:48 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:01.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.513 --rc genhtml_branch_coverage=1 00:07:01.513 --rc genhtml_function_coverage=1 00:07:01.513 --rc genhtml_legend=1 00:07:01.513 --rc geninfo_all_blocks=1 00:07:01.513 --rc geninfo_unexecuted_blocks=1 00:07:01.513 00:07:01.513 ' 00:07:01.513 03:57:48 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:01.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:01.513 --rc genhtml_branch_coverage=1 00:07:01.513 --rc genhtml_function_coverage=1 00:07:01.513 --rc genhtml_legend=1 00:07:01.513 --rc geninfo_all_blocks=1 00:07:01.513 --rc geninfo_unexecuted_blocks=1 00:07:01.513 00:07:01.513 ' 00:07:01.513 03:57:48 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:01.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.513 --rc genhtml_branch_coverage=1 00:07:01.513 --rc genhtml_function_coverage=1 00:07:01.513 --rc genhtml_legend=1 00:07:01.513 --rc geninfo_all_blocks=1 00:07:01.513 --rc geninfo_unexecuted_blocks=1 00:07:01.513 00:07:01.513 ' 00:07:01.513 03:57:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:01.513 03:57:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:01.513 03:57:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.513 03:57:48 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.513 ************************************ 00:07:01.513 START TEST thread_poller_perf 00:07:01.513 ************************************ 00:07:01.513 03:57:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:01.513 [2024-12-06 03:57:48.969168] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:07:01.513 [2024-12-06 03:57:48.969379] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59550 ] 00:07:01.771 [2024-12-06 03:57:49.124914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.771 [2024-12-06 03:57:49.209900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.771 Running 1000 pollers for 1 seconds with 1 microseconds period. 
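poller_perf's banner spells out its three knobs: -b is the number of registered pollers, -l the poller period in microseconds, and -t the run time in seconds, so the two runs in this test differ only in period:

  poller_perf -b 1000 -l 1 -t 1   # 1000 timed pollers, 1 us period
  poller_perf -b 1000 -l 0 -t 1   # 1000 pollers with period 0, run on every reactor pass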
00:07:03.146 [2024-12-06T03:57:50.673Z] ====================================== 00:07:03.146 [2024-12-06T03:57:50.673Z] busy:2612699704 (cyc) 00:07:03.146 [2024-12-06T03:57:50.673Z] total_run_count: 381000 00:07:03.146 [2024-12-06T03:57:50.673Z] tsc_hz: 2600000000 (cyc) 00:07:03.146 [2024-12-06T03:57:50.673Z] ====================================== 00:07:03.146 [2024-12-06T03:57:50.673Z] poller_cost: 6857 (cyc), 2637 (nsec) 00:07:03.146 00:07:03.146 real 0m1.404s 00:07:03.146 user 0m1.233s 00:07:03.146 sys 0m0.064s 00:07:03.146 03:57:50 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.146 03:57:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:03.146 ************************************ 00:07:03.146 END TEST thread_poller_perf 00:07:03.146 ************************************ 00:07:03.147 03:57:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:03.147 03:57:50 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:03.147 03:57:50 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.147 03:57:50 thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.147 ************************************ 00:07:03.147 START TEST thread_poller_perf 00:07:03.147 ************************************ 00:07:03.147 03:57:50 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:03.147 [2024-12-06 03:57:50.409605] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:07:03.147 [2024-12-06 03:57:50.409730] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59587 ] 00:07:03.147 [2024-12-06 03:57:50.563109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.147 Running 1000 pollers for 1 seconds with 0 microseconds period. 
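The summary above is internally consistent: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure follows from tsc_hz. For the 1 us run:

  echo $(( 2612699704 / 381000 ))               # 6857 cycles per poll
  awk 'BEGIN { printf "%.0f\n", 6857 / 2.6 }'   # ~2637 ns at 2.6 GHz

The 0 us run that follows checks out the same way: 2602517534 / 4773000 gives roughly 545 cycles, or about 209 ns.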
00:07:03.147 [2024-12-06 03:57:50.646491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.518 [2024-12-06T03:57:52.045Z] ====================================== 00:07:04.518 [2024-12-06T03:57:52.045Z] busy:2602517534 (cyc) 00:07:04.518 [2024-12-06T03:57:52.045Z] total_run_count: 4773000 00:07:04.518 [2024-12-06T03:57:52.045Z] tsc_hz: 2600000000 (cyc) 00:07:04.518 [2024-12-06T03:57:52.045Z] ====================================== 00:07:04.518 [2024-12-06T03:57:52.045Z] poller_cost: 545 (cyc), 209 (nsec) 00:07:04.518 00:07:04.518 real 0m1.390s 00:07:04.518 user 0m1.223s 00:07:04.518 sys 0m0.062s 00:07:04.518 03:57:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.518 03:57:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:04.518 ************************************ 00:07:04.518 END TEST thread_poller_perf 00:07:04.518 ************************************ 00:07:04.518 03:57:51 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:04.518 00:07:04.518 real 0m3.011s 00:07:04.518 user 0m2.575s 00:07:04.518 sys 0m0.229s 00:07:04.518 ************************************ 00:07:04.518 END TEST thread 00:07:04.518 ************************************ 00:07:04.518 03:57:51 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.518 03:57:51 thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.518 03:57:51 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:04.518 03:57:51 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:04.518 03:57:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.518 03:57:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.518 03:57:51 -- common/autotest_common.sh@10 -- # set +x 00:07:04.518 ************************************ 00:07:04.518 START TEST app_cmdline 00:07:04.518 ************************************ 00:07:04.518 03:57:51 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:04.518 * Looking for test storage... 
00:07:04.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:04.518 03:57:51 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:04.518 03:57:51 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:04.518 03:57:51 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:04.518 03:57:51 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:04.518 03:57:51 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.519 03:57:51 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.519 03:57:51 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.519 03:57:51 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:04.519 03:57:51 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.519 03:57:51 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:04.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.519 --rc genhtml_branch_coverage=1 00:07:04.519 --rc genhtml_function_coverage=1 00:07:04.519 --rc genhtml_legend=1 00:07:04.519 --rc geninfo_all_blocks=1 00:07:04.519 --rc geninfo_unexecuted_blocks=1 00:07:04.519 00:07:04.519 ' 00:07:04.519 03:57:51 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:04.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.519 --rc genhtml_branch_coverage=1 00:07:04.519 --rc genhtml_function_coverage=1 00:07:04.519 --rc genhtml_legend=1 00:07:04.519 --rc geninfo_all_blocks=1 00:07:04.519 --rc geninfo_unexecuted_blocks=1 00:07:04.519 
00:07:04.519 ' 00:07:04.519 03:57:51 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:04.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.519 --rc genhtml_branch_coverage=1 00:07:04.519 --rc genhtml_function_coverage=1 00:07:04.519 --rc genhtml_legend=1 00:07:04.519 --rc geninfo_all_blocks=1 00:07:04.519 --rc geninfo_unexecuted_blocks=1 00:07:04.519 00:07:04.519 ' 00:07:04.519 03:57:51 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:04.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.519 --rc genhtml_branch_coverage=1 00:07:04.519 --rc genhtml_function_coverage=1 00:07:04.519 --rc genhtml_legend=1 00:07:04.519 --rc geninfo_all_blocks=1 00:07:04.519 --rc geninfo_unexecuted_blocks=1 00:07:04.519 00:07:04.519 ' 00:07:04.519 03:57:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:04.519 03:57:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59670 00:07:04.519 03:57:51 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:04.519 03:57:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59670 00:07:04.519 03:57:51 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59670 ']' 00:07:04.519 03:57:51 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.519 03:57:51 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.519 03:57:51 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.519 03:57:51 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.519 03:57:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:04.519 [2024-12-06 03:57:52.038320] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:07:04.519 [2024-12-06 03:57:52.038773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59670 ] 00:07:04.776 [2024-12-06 03:57:52.191307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.776 [2024-12-06 03:57:52.286169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.386 03:57:52 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.386 03:57:52 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:05.386 03:57:52 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:05.643 { 00:07:05.643 "version": "SPDK v25.01-pre git sha1 02b805e62", 00:07:05.643 "fields": { 00:07:05.643 "major": 25, 00:07:05.643 "minor": 1, 00:07:05.643 "patch": 0, 00:07:05.643 "suffix": "-pre", 00:07:05.643 "commit": "02b805e62" 00:07:05.643 } 00:07:05.643 } 00:07:05.643 03:57:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:05.643 03:57:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:05.643 03:57:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:05.643 03:57:52 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:05.643 03:57:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:05.643 03:57:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:05.643 03:57:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:05.643 03:57:52 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.643 03:57:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:05.643 03:57:53 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.643 03:57:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:05.643 03:57:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:05.643 03:57:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:05.643 03:57:53 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:05.643 03:57:53 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:05.643 03:57:53 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:05.643 03:57:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.643 03:57:53 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:05.643 03:57:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.643 03:57:53 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:05.643 03:57:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.643 03:57:53 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:05.643 03:57:53 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:05.643 03:57:53 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:05.899 request: 00:07:05.899 { 00:07:05.899 "method": "env_dpdk_get_mem_stats", 00:07:05.899 "req_id": 1 00:07:05.899 } 00:07:05.899 Got JSON-RPC error response 00:07:05.899 response: 00:07:05.899 { 00:07:05.899 "code": -32601, 00:07:05.899 "message": "Method not found" 00:07:05.899 } 00:07:05.899 03:57:53 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:05.899 03:57:53 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:05.899 03:57:53 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:05.899 03:57:53 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:05.899 03:57:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59670 00:07:05.899 03:57:53 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59670 ']' 00:07:05.899 03:57:53 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59670 00:07:05.899 03:57:53 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:05.899 03:57:53 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.899 03:57:53 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59670 00:07:05.899 killing process with pid 59670 00:07:05.899 03:57:53 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.899 03:57:53 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.899 03:57:53 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59670' 00:07:05.899 03:57:53 app_cmdline -- common/autotest_common.sh@973 -- # kill 59670 00:07:05.899 03:57:53 app_cmdline -- common/autotest_common.sh@978 -- # wait 59670 00:07:07.268 ************************************ 00:07:07.268 END TEST app_cmdline 00:07:07.268 ************************************ 00:07:07.268 00:07:07.268 real 0m2.661s 00:07:07.268 user 0m2.924s 00:07:07.268 sys 0m0.398s 00:07:07.268 03:57:54 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.268 03:57:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:07.268 03:57:54 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:07.268 03:57:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.268 03:57:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.268 03:57:54 -- common/autotest_common.sh@10 -- # set +x 00:07:07.268 ************************************ 00:07:07.268 START TEST version 00:07:07.268 ************************************ 00:07:07.268 03:57:54 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:07.268 * Looking for test storage... 
00:07:07.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:07.268 03:57:54 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:07.268 03:57:54 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:07.268 03:57:54 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:07.268 03:57:54 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:07.268 03:57:54 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.268 03:57:54 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.268 03:57:54 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.268 03:57:54 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.268 03:57:54 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.268 03:57:54 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.268 03:57:54 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.268 03:57:54 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.268 03:57:54 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.268 03:57:54 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.268 03:57:54 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.268 03:57:54 version -- scripts/common.sh@344 -- # case "$op" in 00:07:07.268 03:57:54 version -- scripts/common.sh@345 -- # : 1 00:07:07.268 03:57:54 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.268 03:57:54 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.268 03:57:54 version -- scripts/common.sh@365 -- # decimal 1 00:07:07.268 03:57:54 version -- scripts/common.sh@353 -- # local d=1 00:07:07.268 03:57:54 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.268 03:57:54 version -- scripts/common.sh@355 -- # echo 1 00:07:07.268 03:57:54 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.268 03:57:54 version -- scripts/common.sh@366 -- # decimal 2 00:07:07.268 03:57:54 version -- scripts/common.sh@353 -- # local d=2 00:07:07.268 03:57:54 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.268 03:57:54 version -- scripts/common.sh@355 -- # echo 2 00:07:07.268 03:57:54 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.268 03:57:54 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.268 03:57:54 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.268 03:57:54 version -- scripts/common.sh@368 -- # return 0 00:07:07.268 03:57:54 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.268 03:57:54 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:07.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.268 --rc genhtml_branch_coverage=1 00:07:07.268 --rc genhtml_function_coverage=1 00:07:07.268 --rc genhtml_legend=1 00:07:07.268 --rc geninfo_all_blocks=1 00:07:07.268 --rc geninfo_unexecuted_blocks=1 00:07:07.268 00:07:07.268 ' 00:07:07.268 03:57:54 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:07.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.268 --rc genhtml_branch_coverage=1 00:07:07.268 --rc genhtml_function_coverage=1 00:07:07.268 --rc genhtml_legend=1 00:07:07.268 --rc geninfo_all_blocks=1 00:07:07.268 --rc geninfo_unexecuted_blocks=1 00:07:07.268 00:07:07.268 ' 00:07:07.268 03:57:54 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:07.268 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:07.268 --rc genhtml_branch_coverage=1 00:07:07.268 --rc genhtml_function_coverage=1 00:07:07.268 --rc genhtml_legend=1 00:07:07.268 --rc geninfo_all_blocks=1 00:07:07.268 --rc geninfo_unexecuted_blocks=1 00:07:07.268 00:07:07.268 ' 00:07:07.268 03:57:54 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:07.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.268 --rc genhtml_branch_coverage=1 00:07:07.268 --rc genhtml_function_coverage=1 00:07:07.268 --rc genhtml_legend=1 00:07:07.268 --rc geninfo_all_blocks=1 00:07:07.268 --rc geninfo_unexecuted_blocks=1 00:07:07.268 00:07:07.268 ' 00:07:07.268 03:57:54 version -- app/version.sh@17 -- # get_header_version major 00:07:07.268 03:57:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:07.268 03:57:54 version -- app/version.sh@14 -- # tr -d '"' 00:07:07.268 03:57:54 version -- app/version.sh@14 -- # cut -f2 00:07:07.268 03:57:54 version -- app/version.sh@17 -- # major=25 00:07:07.268 03:57:54 version -- app/version.sh@18 -- # get_header_version minor 00:07:07.268 03:57:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:07.268 03:57:54 version -- app/version.sh@14 -- # cut -f2 00:07:07.268 03:57:54 version -- app/version.sh@14 -- # tr -d '"' 00:07:07.268 03:57:54 version -- app/version.sh@18 -- # minor=1 00:07:07.268 03:57:54 version -- app/version.sh@19 -- # get_header_version patch 00:07:07.269 03:57:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:07.269 03:57:54 version -- app/version.sh@14 -- # cut -f2 00:07:07.269 03:57:54 version -- app/version.sh@14 -- # tr -d '"' 00:07:07.269 03:57:54 version -- app/version.sh@19 -- # patch=0 00:07:07.269 03:57:54 version -- app/version.sh@20 -- # get_header_version suffix 00:07:07.269 03:57:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:07.269 03:57:54 version -- app/version.sh@14 -- # cut -f2 00:07:07.269 03:57:54 version -- app/version.sh@14 -- # tr -d '"' 00:07:07.269 03:57:54 version -- app/version.sh@20 -- # suffix=-pre 00:07:07.269 03:57:54 version -- app/version.sh@22 -- # version=25.1 00:07:07.269 03:57:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:07.269 03:57:54 version -- app/version.sh@28 -- # version=25.1rc0 00:07:07.269 03:57:54 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:07.269 03:57:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:07.269 03:57:54 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:07.269 03:57:54 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:07.269 ************************************ 00:07:07.269 END TEST version 00:07:07.269 ************************************ 00:07:07.269 00:07:07.269 real 0m0.187s 00:07:07.269 user 0m0.108s 00:07:07.269 sys 0m0.105s 00:07:07.269 03:57:54 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.269 03:57:54 version -- common/autotest_common.sh@10 -- # set +x 00:07:07.269 03:57:54 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:07.269 03:57:54 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:07.269 03:57:54 -- spdk/autotest.sh@194 -- # uname -s 00:07:07.269 03:57:54 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:07.269 03:57:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:07.269 03:57:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:07.269 03:57:54 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:07.269 03:57:54 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:07.269 03:57:54 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:07.269 03:57:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.269 03:57:54 -- common/autotest_common.sh@10 -- # set +x 00:07:07.269 ************************************ 00:07:07.269 START TEST blockdev_nvme 00:07:07.269 ************************************ 00:07:07.269 03:57:54 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:07.525 * Looking for test storage... 00:07:07.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:07.525 03:57:54 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:07.525 03:57:54 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:07.525 03:57:54 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:07.525 03:57:54 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.525 03:57:54 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:07.525 03:57:54 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.525 03:57:54 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:07.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.525 --rc genhtml_branch_coverage=1 00:07:07.525 --rc genhtml_function_coverage=1 00:07:07.525 --rc genhtml_legend=1 00:07:07.525 --rc geninfo_all_blocks=1 00:07:07.525 --rc geninfo_unexecuted_blocks=1 00:07:07.525 00:07:07.525 ' 00:07:07.525 03:57:54 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:07.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.525 --rc genhtml_branch_coverage=1 00:07:07.525 --rc genhtml_function_coverage=1 00:07:07.525 --rc genhtml_legend=1 00:07:07.525 --rc geninfo_all_blocks=1 00:07:07.525 --rc geninfo_unexecuted_blocks=1 00:07:07.525 00:07:07.525 ' 00:07:07.525 03:57:54 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:07.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.525 --rc genhtml_branch_coverage=1 00:07:07.525 --rc genhtml_function_coverage=1 00:07:07.525 --rc genhtml_legend=1 00:07:07.525 --rc geninfo_all_blocks=1 00:07:07.525 --rc geninfo_unexecuted_blocks=1 00:07:07.525 00:07:07.525 ' 00:07:07.525 03:57:54 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:07.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.525 --rc genhtml_branch_coverage=1 00:07:07.525 --rc genhtml_function_coverage=1 00:07:07.525 --rc genhtml_legend=1 00:07:07.525 --rc geninfo_all_blocks=1 00:07:07.525 --rc geninfo_unexecuted_blocks=1 00:07:07.525 00:07:07.525 ' 00:07:07.525 03:57:54 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:07.525 03:57:54 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:07:07.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59837 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59837 00:07:07.526 03:57:54 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59837 ']' 00:07:07.526 03:57:54 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.526 03:57:54 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.526 03:57:54 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.526 03:57:54 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.526 03:57:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.526 03:57:54 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:07.526 [2024-12-06 03:57:54.992241] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:07:07.526 [2024-12-06 03:57:54.992636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59837 ] 00:07:07.783 [2024-12-06 03:57:55.150471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.783 [2024-12-06 03:57:55.251946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.348 03:57:55 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.348 03:57:55 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:07:08.348 03:57:55 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:08.348 03:57:55 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:07:08.348 03:57:55 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:08.348 03:57:55 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:08.348 03:57:55 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:08.608 03:57:55 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:08.608 03:57:55 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.608 03:57:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:08.868 03:57:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.868 03:57:56 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:08.868 03:57:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.868 03:57:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:08.868 03:57:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.868 03:57:56 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:07:08.868 03:57:56 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:08.868 03:57:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.868 03:57:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:08.868 03:57:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.868 03:57:56 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:08.868 03:57:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.868 03:57:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:08.868 03:57:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.868 03:57:56 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:08.868 03:57:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.868 03:57:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:08.868 03:57:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.868 03:57:56 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:08.868 03:57:56 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:08.868 03:57:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.868 03:57:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:08.868 03:57:56 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:08.868 03:57:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.868 03:57:56 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:08.868 03:57:56 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:08.869 03:57:56 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "16f0a43d-36b4-4eb4-82ff-b7fff700918a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "16f0a43d-36b4-4eb4-82ff-b7fff700918a",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "009dabbb-ad82-46f4-a042-678049e3945a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "009dabbb-ad82-46f4-a042-678049e3945a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "2b4c2f86-1f70-4727-91f9-2e398d732447"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2b4c2f86-1f70-4727-91f9-2e398d732447",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "8a91a113-c004-43a8-ac1c-9ea58ca8e71a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8a91a113-c004-43a8-ac1c-9ea58ca8e71a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "8d2b128b-254c-4608-82ca-e56fc39ff08e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "8d2b128b-254c-4608-82ca-e56fc39ff08e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "214a41a4-c2ba-4a08-be0f-ae2794bc2440"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "214a41a4-c2ba-4a08-be0f-ae2794bc2440",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:08.869 03:57:56 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:08.869 03:57:56 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:08.869 03:57:56 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:08.869 03:57:56 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 59837 00:07:08.869 03:57:56 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59837 ']' 00:07:08.869 03:57:56 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59837 00:07:08.869 03:57:56 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:07:08.869 03:57:56 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.869 03:57:56 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59837 00:07:08.869 killing process with pid 59837 00:07:08.869 03:57:56 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.869 03:57:56 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.869 03:57:56 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59837' 00:07:08.869 03:57:56 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59837 00:07:08.869 03:57:56 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59837 00:07:10.770 03:57:57 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:10.770 03:57:57 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:10.770 03:57:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:10.770 03:57:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.770 03:57:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.770 ************************************ 00:07:10.770 START TEST bdev_hello_world 00:07:10.770 ************************************ 00:07:10.770 03:57:57 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:10.770 [2024-12-06 03:57:57.994546] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:07:10.770 [2024-12-06 03:57:57.994809] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59921 ] 00:07:10.770 [2024-12-06 03:57:58.149644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.770 [2024-12-06 03:57:58.247029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.336 [2024-12-06 03:57:58.786293] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:11.336 [2024-12-06 03:57:58.786480] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:11.336 [2024-12-06 03:57:58.786522] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:11.336 [2024-12-06 03:57:58.789037] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:11.336 [2024-12-06 03:57:58.789514] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:11.336 [2024-12-06 03:57:58.789597] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:11.336 [2024-12-06 03:57:58.789803] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
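The trace above is the harness's core target lifecycle: start spdk_tgt, poll /var/tmp/spdk.sock until it answers, replay the gen_nvme.sh-generated config over RPC, enumerate unclaimed bdevs, and kill the target on exit. A condensed sketch of that flow, using the repo paths from this run; the polling loop and the single-controller attach are illustrative stand-ins for the waitforlisten and gen_nvme.sh helpers:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!
    trap 'kill "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    # Poll the RPC socket until the target responds, as waitforlisten does
    until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods &>/dev/null; do sleep 0.1; done
    # Attach one controller by PCIe address; the run above attaches Nvme0..Nvme3 this way
    "$SPDK/scripts/rpc.py" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    # List only unclaimed bdevs, mirroring the two-stage jq filter in the trace
    "$SPDK/scripts/rpc.py" bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'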
00:07:11.336 00:07:11.336 [2024-12-06 03:57:58.789878] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:12.270 00:07:12.270 ************************************ 00:07:12.270 END TEST bdev_hello_world 00:07:12.270 ************************************ 00:07:12.270 real 0m1.586s 00:07:12.270 user 0m1.314s 00:07:12.270 sys 0m0.165s 00:07:12.270 03:57:59 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.270 03:57:59 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:12.270 03:57:59 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:12.270 03:57:59 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:12.270 03:57:59 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.270 03:57:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:12.270 ************************************ 00:07:12.270 START TEST bdev_bounds 00:07:12.270 ************************************ 00:07:12.270 03:57:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:12.270 03:57:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59963 00:07:12.270 Process bdevio pid: 59963 00:07:12.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.270 03:57:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:12.270 03:57:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59963' 00:07:12.270 03:57:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59963 00:07:12.270 03:57:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59963 ']' 00:07:12.270 03:57:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.270 03:57:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.270 03:57:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.270 03:57:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.270 03:57:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:12.270 03:57:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:12.270 [2024-12-06 03:57:59.620749] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:07:12.270 [2024-12-06 03:57:59.620871] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59963 ] 00:07:12.270 [2024-12-06 03:57:59.781627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.528 [2024-12-06 03:57:59.884259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.528 [2024-12-06 03:57:59.884504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.528 [2024-12-06 03:57:59.884582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.095 03:58:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.095 03:58:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:13.095 03:58:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:13.095 I/O targets: 00:07:13.095 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:13.095 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:13.095 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:13.095 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:13.095 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:13.095 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:13.095 00:07:13.095 00:07:13.095 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.095 http://cunit.sourceforge.net/ 00:07:13.095 00:07:13.095 00:07:13.095 Suite: bdevio tests on: Nvme3n1 00:07:13.095 Test: blockdev write read block ...passed 00:07:13.095 Test: blockdev write zeroes read block ...passed 00:07:13.095 Test: blockdev write zeroes read no split ...passed 00:07:13.095 Test: blockdev write zeroes read split ...passed 00:07:13.095 Test: blockdev write zeroes read split partial ...passed 00:07:13.095 Test: blockdev reset ...[2024-12-06 03:58:00.602155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:13.095 [2024-12-06 03:58:00.605027] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
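bdevio was launched with -w, so it registers the bdevs from bdev.json and then idles until tests.py triggers the full CUnit run via the perform_tests RPC. A minimal sketch of that handshake, with the binary and script paths from this run (the socket polling again stands in for waitforlisten):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" &
    bdevio_pid=$!
    until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods &>/dev/null; do sleep 0.1; done
    # Fire every registered suite over RPC, then tear the server down,
    # as the harness killprocess does
    "$SPDK/test/bdev/bdevio/tests.py" perform_tests
    kill "$bdevio_pid"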
00:07:13.095 passed 00:07:13.095 Test: blockdev write read 8 blocks ...passed 00:07:13.095 Test: blockdev write read size > 128k ...passed 00:07:13.095 Test: blockdev write read invalid size ...passed 00:07:13.095 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.095 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.095 Test: blockdev write read max offset ...passed 00:07:13.095 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.095 Test: blockdev writev readv 8 blocks ...passed 00:07:13.095 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.095 Test: blockdev writev readv block ...passed 00:07:13.095 Test: blockdev writev readv size > 128k ...passed 00:07:13.095 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.095 Test: blockdev comparev and writev ...[2024-12-06 03:58:00.611101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b800a000 len:0x1000 00:07:13.095 [2024-12-06 03:58:00.611248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.095 passed 00:07:13.095 Test: blockdev nvme passthru rw ...passed 00:07:13.095 Test: blockdev nvme passthru vendor specific ...[2024-12-06 03:58:00.611848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.095 [2024-12-06 03:58:00.611914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.095 passed 00:07:13.095 Test: blockdev nvme admin passthru ...passed 00:07:13.095 Test: blockdev copy ...passed 00:07:13.095 Suite: bdevio tests on: Nvme2n3 00:07:13.095 Test: blockdev write read block ...passed 00:07:13.095 Test: blockdev write zeroes read block ...passed 00:07:13.095 Test: blockdev write zeroes read no split ...passed 00:07:13.354 Test: blockdev write zeroes read split ...passed 00:07:13.354 Test: blockdev write zeroes read split partial ...passed 00:07:13.354 Test: blockdev reset ...[2024-12-06 03:58:00.653189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:13.354 [2024-12-06 03:58:00.656143] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:07:13.354 passed 00:07:13.354 Test: blockdev write read 8 blocks ...
00:07:13.354 passed 00:07:13.354 Test: blockdev write read size > 128k ...passed 00:07:13.354 Test: blockdev write read invalid size ...passed 00:07:13.354 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.354 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.354 Test: blockdev write read max offset ...passed 00:07:13.354 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.354 Test: blockdev writev readv 8 blocks ...passed 00:07:13.354 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.354 Test: blockdev writev readv block ...passed 00:07:13.354 Test: blockdev writev readv size > 128k ...passed 00:07:13.354 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.355 Test: blockdev comparev and writev ...[2024-12-06 03:58:00.661446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x293806000 len:0x1000 00:07:13.355 [2024-12-06 03:58:00.661486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.355 passed 00:07:13.355 Test: blockdev nvme passthru rw ...passed 00:07:13.355 Test: blockdev nvme passthru vendor specific ...[2024-12-06 03:58:00.662045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.355 [2024-12-06 03:58:00.662144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.355 passed 00:07:13.355 Test: blockdev nvme admin passthru ...passed 00:07:13.355 Test: blockdev copy ...passed 00:07:13.355 Suite: bdevio tests on: Nvme2n2 00:07:13.355 Test: blockdev write read block ...passed 00:07:13.355 Test: blockdev write zeroes read block ...passed 00:07:13.355 Test: blockdev write zeroes read no split ...passed 00:07:13.355 Test: blockdev write zeroes read split ...passed 00:07:13.355 Test: blockdev write zeroes read split partial ...passed 00:07:13.355 Test: blockdev reset ...[2024-12-06 03:58:00.704151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:13.355 [2024-12-06 03:58:00.707159] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:07:13.355 Test: blockdev write read 8 blocks ...
00:07:13.355 passed 00:07:13.355 Test: blockdev write read size > 128k ...passed 00:07:13.355 Test: blockdev write read invalid size ...passed 00:07:13.355 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.355 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.355 Test: blockdev write read max offset ...passed 00:07:13.355 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.355 Test: blockdev writev readv 8 blocks ...passed 00:07:13.355 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.355 Test: blockdev writev readv block ...passed 00:07:13.355 Test: blockdev writev readv size > 128k ...passed 00:07:13.355 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.355 Test: blockdev comparev and writev ...[2024-12-06 03:58:00.712356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c303c000 len:0x1000 00:07:13.355 [2024-12-06 03:58:00.712404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.355 passed 00:07:13.355 Test: blockdev nvme passthru rw ...passed 00:07:13.355 Test: blockdev nvme passthru vendor specific ...[2024-12-06 03:58:00.712892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.355 [2024-12-06 03:58:00.712998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.355 passed 00:07:13.355 Test: blockdev nvme admin passthru ...passed 00:07:13.355 Test: blockdev copy ...passed 00:07:13.355 Suite: bdevio tests on: Nvme2n1 00:07:13.355 Test: blockdev write read block ...passed 00:07:13.355 Test: blockdev write zeroes read block ...passed 00:07:13.355 Test: blockdev write zeroes read no split ...passed 00:07:13.355 Test: blockdev write zeroes read split ...passed 00:07:13.355 Test: blockdev write zeroes read split partial ...passed 00:07:13.355 Test: blockdev reset ...[2024-12-06 03:58:00.758625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:13.355 [2024-12-06 03:58:00.761620] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:13.355 passed 00:07:13.355 Test: blockdev write read 8 blocks ...passed 00:07:13.355 Test: blockdev write read size > 128k ...passed 00:07:13.355 Test: blockdev write read invalid size ...passed 00:07:13.355 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.355 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.355 Test: blockdev write read max offset ...passed 00:07:13.355 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.355 Test: blockdev writev readv 8 blocks ...passed 00:07:13.355 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.355 Test: blockdev writev readv block ...passed 00:07:13.355 Test: blockdev writev readv size > 128k ...passed 00:07:13.355 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.355 Test: blockdev comparev and writev ...[2024-12-06 03:58:00.767729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c3038000 len:0x1000 00:07:13.355 [2024-12-06 03:58:00.767861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.355 passed 00:07:13.355 Test: blockdev nvme passthru rw ...passed 00:07:13.355 Test: blockdev nvme passthru vendor specific ...[2024-12-06 03:58:00.768552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.355 [2024-12-06 03:58:00.768649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.355 passed 00:07:13.355 Test: blockdev nvme admin passthru ...passed 00:07:13.355 Test: blockdev copy ...passed 00:07:13.355 Suite: bdevio tests on: Nvme1n1 00:07:13.355 Test: blockdev write read block ...passed 00:07:13.355 Test: blockdev write zeroes read block ...passed 00:07:13.355 Test: blockdev write zeroes read no split ...passed 00:07:13.355 Test: blockdev write zeroes read split ...passed 00:07:13.355 Test: blockdev write zeroes read split partial ...passed 00:07:13.355 Test: blockdev reset ...[2024-12-06 03:58:00.817658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:13.355 [2024-12-06 03:58:00.822055] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
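Each suite's reset case drives the same pair of notices seen here: nvme_ctrlr_disconnect on the way down, then bdev_nvme_reset_ctrlr_complete on reconnect. The equivalent controller-level reset can also be issued by hand; a sketch, assuming the bdev_nvme_reset_controller RPC is available in this SPDK tree:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Reset the controller that backs Nvme1n1, then confirm the bdev survived
    "$SPDK/scripts/rpc.py" bdev_nvme_reset_controller Nvme1
    "$SPDK/scripts/rpc.py" bdev_get_bdevs -b Nvme1n1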
00:07:13.355 passed 00:07:13.355 Test: blockdev write read 8 blocks ...passed 00:07:13.355 Test: blockdev write read size > 128k ...passed 00:07:13.355 Test: blockdev write read invalid size ...passed 00:07:13.355 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.355 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.355 Test: blockdev write read max offset ...passed 00:07:13.355 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.355 Test: blockdev writev readv 8 blocks ...passed 00:07:13.355 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.355 Test: blockdev writev readv block ...passed 00:07:13.355 Test: blockdev writev readv size > 128k ...passed 00:07:13.355 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.355 Test: blockdev comparev and writev ...[2024-12-06 03:58:00.829057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c3034000 len:0x1000 00:07:13.355 [2024-12-06 03:58:00.829106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.355 passed 00:07:13.355 Test: blockdev nvme passthru rw ...passed 00:07:13.355 Test: blockdev nvme passthru vendor specific ...[2024-12-06 03:58:00.829695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.355 [2024-12-06 03:58:00.829736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.355 passed 00:07:13.355 Test: blockdev nvme admin passthru ...passed 00:07:13.355 Test: blockdev copy ...passed 00:07:13.355 Suite: bdevio tests on: Nvme0n1 00:07:13.355 Test: blockdev write read block ...passed 00:07:13.619 Test: blockdev write zeroes read block ...passed 00:07:13.619 Test: blockdev write zeroes read no split ...passed 00:07:13.619 Test: blockdev write zeroes read split ...passed 00:07:13.619 Test: blockdev write zeroes read split partial ...passed 00:07:13.619 Test: blockdev reset ...[2024-12-06 03:58:00.957233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:13.619 [2024-12-06 03:58:00.960120] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:07:13.619 passed 00:07:13.619 Test: blockdev write read 8 blocks ...passed 00:07:13.619 Test: blockdev write read size > 128k ...passed 00:07:13.619 Test: blockdev write read invalid size ...passed 00:07:13.619 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.619 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.619 Test: blockdev write read max offset ...passed 00:07:13.619 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.619 Test: blockdev writev readv 8 blocks ...passed 00:07:13.619 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.619 Test: blockdev writev readv block ...passed 00:07:13.619 Test: blockdev writev readv size > 128k ...passed 00:07:13.619 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.619 Test: blockdev comparev and writev ...[2024-12-06 03:58:00.969557] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:13.619 separate metadata which is not supported yet. 00:07:13.619 passed 00:07:13.619 Test: blockdev nvme passthru rw ...passed 00:07:13.619 Test: blockdev nvme passthru vendor specific ...[2024-12-06 03:58:00.970168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:13.619 [2024-12-06 03:58:00.970210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:13.619 passed 00:07:13.619 Test: blockdev nvme admin passthru ...passed 00:07:13.619 Test: blockdev copy ...passed 00:07:13.619 00:07:13.619 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.619 suites 6 6 n/a 0 0 00:07:13.619 tests 138 138 138 0 0 00:07:13.619 asserts 893 893 893 0 n/a 00:07:13.619 00:07:13.619 Elapsed time = 1.076 seconds 00:07:13.619 0 00:07:13.619 03:58:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59963 00:07:13.619 03:58:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59963 ']' 00:07:13.619 03:58:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59963 00:07:13.619 03:58:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:13.619 03:58:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.619 03:58:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59963 00:07:13.619 03:58:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.619 03:58:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.619 03:58:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59963' killing process with pid 59963 00:07:13.619 03:58:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59963 00:07:13.619 03:58:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59963 00:07:14.552 03:58:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:14.552 00:07:14.552 real 0m2.159s 00:07:14.552 user 0m5.486s 00:07:14.552 sys 0m0.284s 00:07:14.552 03:58:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.552 03:58:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:14.552 ************************************ 00:07:14.552 END
TEST bdev_bounds 00:07:14.552 ************************************ 00:07:14.552 03:58:01 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:14.552 03:58:01 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:14.552 03:58:01 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.552 03:58:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:14.552 ************************************ 00:07:14.552 START TEST bdev_nbd 00:07:14.552 ************************************ 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60017 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60017 /var/tmp/spdk-nbd.sock 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60017 ']' 00:07:14.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
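The nbd test runs a second app (bdev_svc) on its own socket, /var/tmp/spdk-nbd.sock, maps each bdev to a kernel /dev/nbdN node, and proves each node readable with a single 4 KiB O_DIRECT read; the "1+0 records in/out" dd output below is that probe. One round-trip, with the socket, bdev, and device names from this run:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
    # One direct-I/O block through the kernel block layer validates the mapping
    dd if=/dev/nbd0 of="$SPDK/test/bdev/nbdtest" bs=4096 count=1 iflag=direct
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0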
00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:14.552 03:58:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:14.552 [2024-12-06 03:58:01.823982] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:07:14.552 [2024-12-06 03:58:01.824554] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.552 [2024-12-06 03:58:01.982939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.812 [2024-12-06 03:58:02.084270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.376 03:58:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.376 03:58:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:15.376 03:58:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:15.376 03:58:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.376 03:58:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:15.376 03:58:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:15.376 03:58:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:15.376 03:58:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.377 03:58:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:15.377 03:58:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:15.377 03:58:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:15.377 03:58:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:15.377 03:58:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:15.377 03:58:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:15.377 03:58:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:15.636 03:58:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:15.636 03:58:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:15.636 03:58:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:15.636 03:58:02 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:15.636 03:58:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.636 03:58:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.636 03:58:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.636 03:58:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:15.636 03:58:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.636 03:58:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.636 03:58:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.636 03:58:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.636 1+0 records in 00:07:15.636 1+0 records out 00:07:15.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331223 s, 12.4 MB/s 00:07:15.636 03:58:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.636 03:58:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.636 03:58:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.636 03:58:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.636 03:58:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.636 03:58:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.636 03:58:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:15.636 03:58:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:15.636 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:15.636 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:15.637 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:15.637 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:15.637 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.637 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.637 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.637 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:15.637 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.637 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.637 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.637 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.637 1+0 records in 00:07:15.637 1+0 records out 00:07:15.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429877 s, 9.5 MB/s 00:07:15.637 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.637 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
size=4096 00:07:15.637 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.637 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.637 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.637 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.637 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:15.637 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:15.898 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:15.898 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:15.898 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:15.898 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:15.898 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.898 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.898 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.898 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:15.898 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.898 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.898 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.898 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.898 1+0 records in 00:07:15.898 1+0 records out 00:07:15.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303822 s, 13.5 MB/s 00:07:15.898 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.898 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.898 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.898 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.898 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.898 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.898 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:15.898 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:16.158 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:16.158 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:16.158 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:16.158 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:16.158 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.158 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.158 03:58:03 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.158 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:16.158 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.158 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.158 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.158 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.158 1+0 records in 00:07:16.158 1+0 records out 00:07:16.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282789 s, 14.5 MB/s 00:07:16.158 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.158 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:16.158 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.158 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.158 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.158 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:16.158 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:16.158 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:16.419 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:16.419 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:16.419 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:16.419 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:16.419 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.419 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.419 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.419 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:16.420 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.420 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.420 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.420 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.420 1+0 records in 00:07:16.420 1+0 records out 00:07:16.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276957 s, 14.8 MB/s 00:07:16.420 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.420 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:16.420 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.420 03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.420 
03:58:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.420 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:16.420 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:16.420 03:58:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:16.680 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:16.680 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:16.680 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:16.680 03:58:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:16.680 03:58:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.680 03:58:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.680 03:58:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.680 03:58:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:16.680 03:58:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.680 03:58:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.680 03:58:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.680 03:58:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.680 1+0 records in 00:07:16.680 1+0 records out 00:07:16.680 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532577 s, 7.7 MB/s 00:07:16.680 03:58:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.680 03:58:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:16.680 03:58:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.680 03:58:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.680 03:58:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.680 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:16.680 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:16.680 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.942 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:16.942 { 00:07:16.942 "nbd_device": "/dev/nbd0", 00:07:16.942 "bdev_name": "Nvme0n1" 00:07:16.942 }, 00:07:16.943 { 00:07:16.943 "nbd_device": "/dev/nbd1", 00:07:16.943 "bdev_name": "Nvme1n1" 00:07:16.943 }, 00:07:16.943 { 00:07:16.943 "nbd_device": "/dev/nbd2", 00:07:16.943 "bdev_name": "Nvme2n1" 00:07:16.943 }, 00:07:16.943 { 00:07:16.943 "nbd_device": "/dev/nbd3", 00:07:16.943 "bdev_name": "Nvme2n2" 00:07:16.943 }, 00:07:16.943 { 00:07:16.943 "nbd_device": "/dev/nbd4", 00:07:16.943 "bdev_name": "Nvme2n3" 00:07:16.943 }, 00:07:16.943 { 00:07:16.943 "nbd_device": "/dev/nbd5", 00:07:16.943 "bdev_name": "Nvme3n1" 00:07:16.943 } 00:07:16.943 ]' 00:07:16.943 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo 
"${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:16.943 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:16.943 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:16.943 { 00:07:16.943 "nbd_device": "/dev/nbd0", 00:07:16.943 "bdev_name": "Nvme0n1" 00:07:16.943 }, 00:07:16.943 { 00:07:16.943 "nbd_device": "/dev/nbd1", 00:07:16.943 "bdev_name": "Nvme1n1" 00:07:16.943 }, 00:07:16.943 { 00:07:16.943 "nbd_device": "/dev/nbd2", 00:07:16.943 "bdev_name": "Nvme2n1" 00:07:16.943 }, 00:07:16.943 { 00:07:16.943 "nbd_device": "/dev/nbd3", 00:07:16.943 "bdev_name": "Nvme2n2" 00:07:16.943 }, 00:07:16.943 { 00:07:16.943 "nbd_device": "/dev/nbd4", 00:07:16.943 "bdev_name": "Nvme2n3" 00:07:16.943 }, 00:07:16.943 { 00:07:16.943 "nbd_device": "/dev/nbd5", 00:07:16.943 "bdev_name": "Nvme3n1" 00:07:16.943 } 00:07:16.943 ]' 00:07:16.943 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:16.943 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.943 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:16.943 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:16.943 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:16.943 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.943 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:17.204 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:17.204 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:17.204 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:17.204 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.204 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.204 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:17.204 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.204 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.204 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.204 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:17.463 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:17.463 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:17.463 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:17.463 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.463 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.463 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:17.463 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.463 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.463 03:58:04 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.464 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:17.464 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:17.464 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:17.464 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:17.464 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.464 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.464 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:17.464 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.464 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.464 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.464 03:58:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:17.723 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:17.723 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:17.723 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:17.723 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.723 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.723 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:17.723 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.723 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.723 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.723 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:17.983 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:17.983 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:17.983 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:17.983 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.983 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.983 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:17.983 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.983 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.983 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.983 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:18.243 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:18.243 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:18.243 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 
00:07:18.243 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.243 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.243 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:18.243 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.243 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.243 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.243 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.243 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.243 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:18.243 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:18.243 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local 
nbd_list 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:18.503 03:58:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:18.503 /dev/nbd0 00:07:18.503 03:58:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:18.503 03:58:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:18.503 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:18.503 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:18.503 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.503 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.503 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:18.503 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:18.503 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.503 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.503 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.764 1+0 records in 00:07:18.764 1+0 records out 00:07:18.764 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358185 s, 11.4 MB/s 00:07:18.764 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.764 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:18.764 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.764 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.764 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:18.764 03:58:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.764 03:58:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:18.764 03:58:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:18.764 /dev/nbd1 00:07:18.764 03:58:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:18.764 03:58:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:18.764 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:18.764 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:18.764 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.764 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.764 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:18.764 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:18.765 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:07:18.765 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.765 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.765 1+0 records in 00:07:18.765 1+0 records out 00:07:18.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449599 s, 9.1 MB/s 00:07:18.765 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.765 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:18.765 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.765 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.765 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:18.765 03:58:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.765 03:58:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:18.765 03:58:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:19.026 /dev/nbd10 00:07:19.026 03:58:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:19.026 03:58:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:19.026 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:19.026 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.026 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.026 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.026 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:19.026 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.026 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.026 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.026 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.026 1+0 records in 00:07:19.026 1+0 records out 00:07:19.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311365 s, 13.2 MB/s 00:07:19.026 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.026 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.026 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.026 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.026 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.026 03:58:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.026 03:58:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:19.026 03:58:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 
00:07:19.287 /dev/nbd11 00:07:19.287 03:58:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:19.287 03:58:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:19.287 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:19.287 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.287 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.287 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.287 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:19.287 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.287 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.287 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.287 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.287 1+0 records in 00:07:19.287 1+0 records out 00:07:19.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00069061 s, 5.9 MB/s 00:07:19.287 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.287 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.287 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.287 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.287 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.287 03:58:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.287 03:58:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:19.288 03:58:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:19.549 /dev/nbd12 00:07:19.549 03:58:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:19.549 03:58:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:19.549 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:19.549 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.549 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.549 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.549 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:19.549 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.549 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.549 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.549 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.549 1+0 records in 00:07:19.549 1+0 records out 00:07:19.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004269 s, 9.6 MB/s 00:07:19.549 03:58:06 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.549 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.549 03:58:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.549 03:58:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.549 03:58:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.549 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.549 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:19.549 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:19.810 /dev/nbd13 00:07:19.810 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:19.810 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:19.810 03:58:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:19.810 03:58:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.810 03:58:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.810 03:58:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.810 03:58:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:19.810 03:58:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.810 03:58:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.810 03:58:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.810 03:58:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.810 1+0 records in 00:07:19.810 1+0 records out 00:07:19.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400086 s, 10.2 MB/s 00:07:19.810 03:58:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.810 03:58:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.810 03:58:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.810 03:58:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.810 03:58:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.810 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.810 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:19.810 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.810 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.810 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.071 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:20.071 { 00:07:20.071 "nbd_device": "/dev/nbd0", 00:07:20.071 "bdev_name": "Nvme0n1" 00:07:20.071 }, 00:07:20.071 { 00:07:20.071 
"nbd_device": "/dev/nbd1", 00:07:20.071 "bdev_name": "Nvme1n1" 00:07:20.071 }, 00:07:20.071 { 00:07:20.071 "nbd_device": "/dev/nbd10", 00:07:20.071 "bdev_name": "Nvme2n1" 00:07:20.071 }, 00:07:20.071 { 00:07:20.071 "nbd_device": "/dev/nbd11", 00:07:20.071 "bdev_name": "Nvme2n2" 00:07:20.071 }, 00:07:20.071 { 00:07:20.071 "nbd_device": "/dev/nbd12", 00:07:20.071 "bdev_name": "Nvme2n3" 00:07:20.071 }, 00:07:20.071 { 00:07:20.071 "nbd_device": "/dev/nbd13", 00:07:20.071 "bdev_name": "Nvme3n1" 00:07:20.071 } 00:07:20.071 ]' 00:07:20.071 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:20.071 { 00:07:20.071 "nbd_device": "/dev/nbd0", 00:07:20.071 "bdev_name": "Nvme0n1" 00:07:20.071 }, 00:07:20.071 { 00:07:20.071 "nbd_device": "/dev/nbd1", 00:07:20.071 "bdev_name": "Nvme1n1" 00:07:20.071 }, 00:07:20.071 { 00:07:20.071 "nbd_device": "/dev/nbd10", 00:07:20.071 "bdev_name": "Nvme2n1" 00:07:20.071 }, 00:07:20.071 { 00:07:20.071 "nbd_device": "/dev/nbd11", 00:07:20.071 "bdev_name": "Nvme2n2" 00:07:20.071 }, 00:07:20.071 { 00:07:20.071 "nbd_device": "/dev/nbd12", 00:07:20.071 "bdev_name": "Nvme2n3" 00:07:20.071 }, 00:07:20.071 { 00:07:20.071 "nbd_device": "/dev/nbd13", 00:07:20.071 "bdev_name": "Nvme3n1" 00:07:20.071 } 00:07:20.071 ]' 00:07:20.071 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.071 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:20.071 /dev/nbd1 00:07:20.071 /dev/nbd10 00:07:20.071 /dev/nbd11 00:07:20.071 /dev/nbd12 00:07:20.071 /dev/nbd13' 00:07:20.071 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:20.071 /dev/nbd1 00:07:20.071 /dev/nbd10 00:07:20.071 /dev/nbd11 00:07:20.071 /dev/nbd12 00:07:20.071 /dev/nbd13' 00:07:20.071 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.071 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:20.071 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:20.071 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:20.071 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:20.071 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:20.071 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:20.071 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.071 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:20.071 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:20.071 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:20.071 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:20.071 256+0 records in 00:07:20.071 256+0 records out 00:07:20.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00915016 s, 115 MB/s 00:07:20.071 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.071 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 
oflag=direct 00:07:20.071 256+0 records in 00:07:20.071 256+0 records out 00:07:20.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0622537 s, 16.8 MB/s 00:07:20.071 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.071 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:20.499 256+0 records in 00:07:20.499 256+0 records out 00:07:20.499 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0756149 s, 13.9 MB/s 00:07:20.499 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.499 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:20.499 256+0 records in 00:07:20.499 256+0 records out 00:07:20.499 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0546986 s, 19.2 MB/s 00:07:20.499 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.499 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:20.499 256+0 records in 00:07:20.499 256+0 records out 00:07:20.499 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0538541 s, 19.5 MB/s 00:07:20.499 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.499 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:20.780 256+0 records in 00:07:20.780 256+0 records out 00:07:20.780 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0528275 s, 19.8 MB/s 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:20.780 256+0 records in 00:07:20.780 256+0 records out 00:07:20.780 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0578445 s, 18.1 MB/s 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.780 03:58:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:20.780 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:20.780 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:20.780 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:20.780 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.780 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.780 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:20.780 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.780 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.780 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.780 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:20.780 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:20.780 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:20.780 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:20.780 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.780 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 
20 )) 00:07:20.780 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:20.780 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.780 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.780 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.780 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:21.040 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:21.040 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:21.040 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:21.040 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.040 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.040 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:21.040 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.040 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.040 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.040 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:21.300 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:21.300 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:21.300 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:21.300 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.300 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.300 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:21.300 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.300 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.300 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.300 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:21.559 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:21.559 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:21.559 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:21.559 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.559 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.559 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:21.559 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.559 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.559 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.559 03:58:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd13 00:07:21.818 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:21.818 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:21.818 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:21.818 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.818 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.818 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:21.818 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.818 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.818 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:21.818 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.818 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:22.078 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:22.078 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.078 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:22.078 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:22.078 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:22.078 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.078 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:22.078 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:22.078 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:22.078 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:22.078 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:22.078 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:22.078 03:58:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:22.078 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.078 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:22.078 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:22.078 malloc_lvol_verify 00:07:22.078 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:22.337 53f92cd8-2eee-4faf-8f79-9d87b3e174e1 00:07:22.337 03:58:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:22.596 c00a74c1-c95c-4b08-8fa5-e62fa6cdd26e 00:07:22.596 03:58:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:22.853 /dev/nbd0 00:07:22.853 03:58:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 
00:07:22.853 03:58:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:22.853 03:58:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:22.853 03:58:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:22.853 03:58:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:22.853 mke2fs 1.47.0 (5-Feb-2023) 00:07:22.853 Discarding device blocks: 0/4096 done 00:07:22.853 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:22.853 00:07:22.853 Allocating group tables: 0/1 done 00:07:22.853 Writing inode tables: 0/1 done 00:07:22.853 Creating journal (1024 blocks): done 00:07:22.853 Writing superblocks and filesystem accounting information: 0/1 done 00:07:22.853 00:07:22.853 03:58:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:22.853 03:58:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.853 03:58:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:22.853 03:58:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:22.853 03:58:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:22.853 03:58:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.853 03:58:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:23.110 03:58:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:23.110 03:58:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:23.110 03:58:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:23.110 03:58:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.110 03:58:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.110 03:58:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:23.110 03:58:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.110 03:58:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.110 03:58:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60017 00:07:23.110 03:58:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60017 ']' 00:07:23.110 03:58:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60017 00:07:23.110 03:58:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:23.110 03:58:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.110 03:58:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60017 00:07:23.110 03:58:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.110 03:58:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.110 killing process with pid 60017 00:07:23.110 03:58:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60017' 00:07:23.110 03:58:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60017 00:07:23.110 03:58:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60017 00:07:24.478 03:58:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - 
SIGINT SIGTERM EXIT 00:07:24.478 00:07:24.478 real 0m9.826s 00:07:24.478 user 0m13.987s 00:07:24.478 sys 0m3.036s 00:07:24.478 03:58:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.478 03:58:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:24.478 ************************************ 00:07:24.478 END TEST bdev_nbd 00:07:24.478 ************************************ 00:07:24.478 03:58:11 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:24.478 03:58:11 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:07:24.478 skipping fio tests on NVMe due to multi-ns failures. 00:07:24.478 03:58:11 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:24.478 03:58:11 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:24.478 03:58:11 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:24.478 03:58:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:24.478 03:58:11 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.478 03:58:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:24.478 ************************************ 00:07:24.478 START TEST bdev_verify 00:07:24.478 ************************************ 00:07:24.478 03:58:11 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:24.478 [2024-12-06 03:58:11.684393] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:07:24.478 [2024-12-06 03:58:11.684505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60385 ] 00:07:24.478 [2024-12-06 03:58:11.845146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:24.478 [2024-12-06 03:58:11.945741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.478 [2024-12-06 03:58:11.945774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.043 Running I/O for 5 seconds... 
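[editor's note] Every nbd_start_disk in the bdev_nbd test that finished above is followed by the same readiness dance: poll /proc/partitions for the device, then prove it is readable with one direct 4 KiB read. A minimal bash sketch of that pattern follows; device and helper names come from the trace, but the retry delay is an assumption (the sleep branch is never hit in this log, so the real interval is not visible).

  nbd_name=nbd5                       # device under test, taken from the trace
  for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1                       # assumed back-off between polls
  done
  # one direct read, mirroring the dd/stat check in the trace
  dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  size=$(stat -c %s /tmp/nbdtest)
  rm -f /tmp/nbdtest
  [ "$size" != 0 ] && echo "$nbd_name is ready"   # same non-empty check the harness applies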
00:07:27.368 20800.00 IOPS, 81.25 MiB/s [2024-12-06T03:58:15.825Z] 22016.00 IOPS, 86.00 MiB/s [2024-12-06T03:58:16.820Z] 22869.33 IOPS, 89.33 MiB/s [2024-12-06T03:58:17.750Z] 23104.00 IOPS, 90.25 MiB/s [2024-12-06T03:58:17.750Z] 23744.00 IOPS, 92.75 MiB/s 00:07:30.223 Latency(us) 00:07:30.223 [2024-12-06T03:58:17.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.223 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:30.223 Verification LBA range: start 0x0 length 0xbd0bd 00:07:30.223 Nvme0n1 : 5.06 1947.76 7.61 0.00 0.00 65550.54 12703.90 76223.41 00:07:30.223 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:30.223 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:30.223 Nvme0n1 : 5.07 1971.05 7.70 0.00 0.00 64783.63 13208.02 75416.81 00:07:30.223 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:30.223 Verification LBA range: start 0x0 length 0xa0000 00:07:30.224 Nvme1n1 : 5.06 1946.59 7.60 0.00 0.00 65490.08 15526.99 66947.54 00:07:30.224 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:30.224 Verification LBA range: start 0xa0000 length 0xa0000 00:07:30.224 Nvme1n1 : 5.07 1969.40 7.69 0.00 0.00 64612.75 15526.99 62914.56 00:07:30.224 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:30.224 Verification LBA range: start 0x0 length 0x80000 00:07:30.224 Nvme2n1 : 5.06 1946.05 7.60 0.00 0.00 65397.46 16736.89 64527.75 00:07:30.224 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:30.224 Verification LBA range: start 0x80000 length 0x80000 00:07:30.224 Nvme2n1 : 5.07 1968.31 7.69 0.00 0.00 64482.51 16938.54 60494.77 00:07:30.224 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:30.224 Verification LBA range: start 0x0 length 0x80000 00:07:30.224 Nvme2n2 : 5.07 1945.33 7.60 0.00 0.00 65282.76 16938.54 62107.96 00:07:30.224 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:30.224 Verification LBA range: start 0x80000 length 0x80000 00:07:30.224 Nvme2n2 : 5.07 1967.79 7.69 0.00 0.00 64339.83 16434.41 60898.07 00:07:30.224 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:30.224 Verification LBA range: start 0x0 length 0x80000 00:07:30.224 Nvme2n3 : 5.07 1944.53 7.60 0.00 0.00 65158.51 14115.45 62914.56 00:07:30.224 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:30.224 Verification LBA range: start 0x80000 length 0x80000 00:07:30.224 Nvme2n3 : 5.08 1967.24 7.68 0.00 0.00 64223.65 12905.55 62511.26 00:07:30.224 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:30.224 Verification LBA range: start 0x0 length 0x20000 00:07:30.224 Nvme3n1 : 5.07 1944.00 7.59 0.00 0.00 65021.34 11544.42 67350.84 00:07:30.224 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:30.224 Verification LBA range: start 0x20000 length 0x20000 00:07:30.224 Nvme3n1 : 5.08 1977.03 7.72 0.00 0.00 63847.27 2545.82 65737.65 00:07:30.224 [2024-12-06T03:58:17.751Z] =================================================================================================================== 00:07:30.224 [2024-12-06T03:58:17.751Z] Total : 23495.10 91.78 0.00 0.00 64845.64 2545.82 76223.41 00:07:32.127 00:07:32.127 real 0m7.894s 00:07:32.127 user 0m14.885s 00:07:32.127 sys 0m0.213s 00:07:32.127 03:58:19 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.127 03:58:19 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:32.127 ************************************ 00:07:32.127 END TEST bdev_verify 00:07:32.127 ************************************ 00:07:32.127 03:58:19 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:32.127 03:58:19 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:32.128 03:58:19 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.128 03:58:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:32.128 ************************************ 00:07:32.128 START TEST bdev_verify_big_io 00:07:32.128 ************************************ 00:07:32.128 03:58:19 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:32.128 [2024-12-06 03:58:19.618994] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:07:32.128 [2024-12-06 03:58:19.619107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60483 ] 00:07:32.385 [2024-12-06 03:58:19.777886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:32.385 [2024-12-06 03:58:19.877195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.385 [2024-12-06 03:58:19.877335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.315 Running I/O for 5 seconds... 
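[editor's note] The bdev_verify stage that just ended and the bdev_verify_big_io stage now running are the same bdevperf invocation with different I/O sizes. A standalone sketch, with paths and flags copied from the command line in the log (no semantics asserted for -C beyond what the log shows):

  SPDK=/home/vagrant/spdk_repo/spdk   # repo root as it appears in the log
  # -q 128    : 128 outstanding I/Os per job
  # -o 4096   : 4 KiB I/Os (the big-I/O pass swaps in -o 65536)
  # -w verify : write, read back, and compare
  # -t 5      : run for 5 seconds
  # -C -m 0x3 : copied verbatim from the harness invocation
  "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3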
00:07:38.486 1632.00 IOPS, 102.00 MiB/s [2024-12-06T03:58:26.577Z] 2289.00 IOPS, 143.06 MiB/s [2024-12-06T03:58:26.577Z] 2531.33 IOPS, 158.21 MiB/s 00:07:39.050 Latency(us) 00:07:39.050 [2024-12-06T03:58:26.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.050 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:39.050 Verification LBA range: start 0x0 length 0xbd0b 00:07:39.050 Nvme0n1 : 5.73 112.80 7.05 0.00 0.00 1071735.70 14115.45 1038896.84 00:07:39.050 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:39.050 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:39.050 Nvme0n1 : 5.75 111.22 6.95 0.00 0.00 1110588.49 13308.85 1122782.92 00:07:39.050 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:39.050 Verification LBA range: start 0x0 length 0xa000 00:07:39.050 Nvme1n1 : 5.76 122.26 7.64 0.00 0.00 996634.42 26416.05 948557.98 00:07:39.050 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:39.050 Verification LBA range: start 0xa000 length 0xa000 00:07:39.050 Nvme1n1 : 5.78 111.19 6.95 0.00 0.00 1071745.28 105664.20 1193763.45 00:07:39.050 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:39.050 Verification LBA range: start 0x0 length 0x8000 00:07:39.050 Nvme2n1 : 5.76 122.18 7.64 0.00 0.00 970936.61 28835.84 1077613.49 00:07:39.050 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:39.050 Verification LBA range: start 0x8000 length 0x8000 00:07:39.050 Nvme2n1 : 5.79 114.40 7.15 0.00 0.00 1010947.71 27021.00 948557.98 00:07:39.050 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:39.050 Verification LBA range: start 0x0 length 0x8000 00:07:39.050 Nvme2n2 : 5.76 122.14 7.63 0.00 0.00 946101.24 29642.44 1096971.82 00:07:39.050 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:39.050 Verification LBA range: start 0x8000 length 0x8000 00:07:39.050 Nvme2n2 : 5.81 118.93 7.43 0.00 0.00 943181.30 20769.87 1006632.96 00:07:39.050 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:39.050 Verification LBA range: start 0x0 length 0x8000 00:07:39.050 Nvme2n3 : 5.77 128.35 8.02 0.00 0.00 882645.32 2545.82 1116330.14 00:07:39.050 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:39.050 Verification LBA range: start 0x8000 length 0x8000 00:07:39.050 Nvme2n3 : 5.88 127.90 7.99 0.00 0.00 850275.63 18350.08 2013265.92 00:07:39.050 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:39.050 Verification LBA range: start 0x0 length 0x2000 00:07:39.050 Nvme3n1 : 5.78 132.98 8.31 0.00 0.00 829716.99 4612.73 1129235.69 00:07:39.050 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:39.050 Verification LBA range: start 0x2000 length 0x2000 00:07:39.050 Nvme3n1 : 5.94 164.07 10.25 0.00 0.00 644920.57 507.27 2064888.12 00:07:39.050 [2024-12-06T03:58:26.577Z] =================================================================================================================== 00:07:39.050 [2024-12-06T03:58:26.577Z] Total : 1488.44 93.03 0.00 0.00 929771.58 507.27 2064888.12 00:07:41.569 00:07:41.569 real 0m9.296s 00:07:41.569 user 0m17.699s 00:07:41.569 sys 0m0.225s 00:07:41.569 03:58:28 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.569 03:58:28 
blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:41.569 ************************************ 00:07:41.569 END TEST bdev_verify_big_io 00:07:41.569 ************************************ 00:07:41.569 03:58:28 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:41.570 03:58:28 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:41.570 03:58:28 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.570 03:58:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.570 ************************************ 00:07:41.570 START TEST bdev_write_zeroes 00:07:41.570 ************************************ 00:07:41.570 03:58:28 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:41.570 [2024-12-06 03:58:28.956586] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:07:41.570 [2024-12-06 03:58:28.956701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60594 ] 00:07:41.827 [2024-12-06 03:58:29.111954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.827 [2024-12-06 03:58:29.194409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.390 Running I/O for 1 seconds... 
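[editor's note] The write_zeroes pass now running reuses the same binary and bdev config as the two verify passes; only the workload selector and duration change. Sketch, with flags copied from the log and $SPDK as in the earlier sketch:

  "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
      -q 128 -o 4096 -w write_zeroes -t 1   # issue write-zeroes for 1 second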
00:07:44.267 11014.00 IOPS, 43.02 MiB/s [2024-12-06T03:58:32.059Z] 5874.00 IOPS, 22.95 MiB/s 00:07:44.532 Latency(us) 00:07:44.532 [2024-12-06T03:58:32.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.532 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.532 Nvme0n1 : 2.16 595.79 2.33 0.00 0.00 148766.81 4285.05 1542213.32 00:07:44.532 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.532 Nvme1n1 : 1.21 1694.95 6.62 0.00 0.00 75116.86 8620.50 490410.93 00:07:44.532 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.532 Nvme2n1 : 1.20 1701.87 6.65 0.00 0.00 74940.10 8469.27 487184.54 00:07:44.532 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.532 Nvme2n2 : 1.20 1753.26 6.85 0.00 0.00 72642.23 8469.27 487184.54 00:07:44.532 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.532 Nvme2n3 : 1.21 1751.48 6.84 0.00 0.00 72557.51 8469.27 483958.15 00:07:44.532 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.532 Nvme3n1 : 1.21 1802.65 7.04 0.00 0.00 70299.43 8318.03 362968.62 00:07:44.532 [2024-12-06T03:58:32.059Z] =================================================================================================================== 00:07:44.532 [2024-12-06T03:58:32.059Z] Total : 9300.01 36.33 0.00 0.00 81332.90 4285.05 1542213.32 00:07:45.919 00:07:45.919 real 0m4.317s 00:07:45.919 user 0m3.997s 00:07:45.919 sys 0m0.202s 00:07:45.919 03:58:33 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.919 ************************************ 00:07:45.919 END TEST bdev_write_zeroes 00:07:45.919 ************************************ 00:07:45.919 03:58:33 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:45.919 03:58:33 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:45.919 03:58:33 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:45.919 03:58:33 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.919 03:58:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.919 ************************************ 00:07:45.919 START TEST bdev_json_nonenclosed 00:07:45.919 ************************************ 00:07:45.919 03:58:33 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:45.919 [2024-12-06 03:58:33.340386] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:07:45.919 [2024-12-06 03:58:33.340502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60663 ] 00:07:46.178 [2024-12-06 03:58:33.501599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.178 [2024-12-06 03:58:33.602905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.178 [2024-12-06 03:58:33.602974] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:46.178 [2024-12-06 03:58:33.602989] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:46.178 [2024-12-06 03:58:33.602999] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.437 00:07:46.437 real 0m0.509s 00:07:46.437 user 0m0.311s 00:07:46.437 sys 0m0.094s 00:07:46.437 03:58:33 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.437 ************************************ 00:07:46.437 END TEST bdev_json_nonenclosed 00:07:46.437 ************************************ 00:07:46.437 03:58:33 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:46.437 03:58:33 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:46.437 03:58:33 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:46.437 03:58:33 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.437 03:58:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.437 ************************************ 00:07:46.437 START TEST bdev_json_nonarray 00:07:46.437 ************************************ 00:07:46.437 03:58:33 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:46.437 [2024-12-06 03:58:33.904320] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:07:46.437 [2024-12-06 03:58:33.904438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60694 ] 00:07:46.698 [2024-12-06 03:58:34.061914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.698 [2024-12-06 03:58:34.163130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.698 [2024-12-06 03:58:34.163220] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
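
The two *ERROR* lines above are the point of these tests: the config loader must reject a file whose top level is not enclosed in {} and a file whose "subsystems" key is not an array. A hedged sketch of inputs that would trip the same checks; the shapes are assumed from the error messages, not copied from the repo's nonenclosed.json and nonarray.json:

# Hypothetical stand-ins for the two bad configs used above.
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": { "not": "an array" } }
EOF
# Feeding either to bdevperf --json should fail in json_config_prepare_ctx
# and stop the app non-zero, matching the traces here.
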
00:07:46.698 [2024-12-06 03:58:34.163237] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:46.698 [2024-12-06 03:58:34.163247] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.960 00:07:46.960 real 0m0.502s 00:07:46.960 user 0m0.317s 00:07:46.960 sys 0m0.080s 00:07:46.960 03:58:34 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.960 ************************************ 00:07:46.960 END TEST bdev_json_nonarray 00:07:46.960 ************************************ 00:07:46.960 03:58:34 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:46.960 03:58:34 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:07:46.960 03:58:34 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:07:46.960 03:58:34 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:07:46.960 03:58:34 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:46.960 03:58:34 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:07:46.960 03:58:34 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:46.960 03:58:34 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:46.960 03:58:34 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:46.960 03:58:34 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:46.960 03:58:34 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:46.960 03:58:34 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:46.960 00:07:46.960 real 0m39.648s 00:07:46.960 user 1m1.296s 00:07:46.960 sys 0m5.012s 00:07:46.960 03:58:34 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.960 ************************************ 00:07:46.960 END TEST blockdev_nvme 00:07:46.960 ************************************ 00:07:46.960 03:58:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.960 03:58:34 -- spdk/autotest.sh@209 -- # uname -s 00:07:46.960 03:58:34 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:46.960 03:58:34 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:46.960 03:58:34 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:46.960 03:58:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.960 03:58:34 -- common/autotest_common.sh@10 -- # set +x 00:07:46.960 ************************************ 00:07:46.960 START TEST blockdev_nvme_gpt 00:07:46.960 ************************************ 00:07:46.960 03:58:34 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:47.223 * Looking for test storage... 
00:07:47.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:47.223 03:58:34 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:47.223 03:58:34 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:07:47.223 03:58:34 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:47.223 03:58:34 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.223 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.224 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.224 03:58:34 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:47.224 03:58:34 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.224 03:58:34 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:47.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.224 --rc genhtml_branch_coverage=1 00:07:47.224 --rc genhtml_function_coverage=1 00:07:47.224 --rc genhtml_legend=1 00:07:47.224 --rc geninfo_all_blocks=1 00:07:47.224 --rc geninfo_unexecuted_blocks=1 00:07:47.224 00:07:47.224 ' 00:07:47.224 03:58:34 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:47.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.224 --rc 
genhtml_branch_coverage=1 00:07:47.224 --rc genhtml_function_coverage=1 00:07:47.224 --rc genhtml_legend=1 00:07:47.224 --rc geninfo_all_blocks=1 00:07:47.224 --rc geninfo_unexecuted_blocks=1 00:07:47.224 00:07:47.224 ' 00:07:47.224 03:58:34 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:47.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.224 --rc genhtml_branch_coverage=1 00:07:47.224 --rc genhtml_function_coverage=1 00:07:47.224 --rc genhtml_legend=1 00:07:47.224 --rc geninfo_all_blocks=1 00:07:47.224 --rc geninfo_unexecuted_blocks=1 00:07:47.224 00:07:47.224 ' 00:07:47.224 03:58:34 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:47.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.224 --rc genhtml_branch_coverage=1 00:07:47.224 --rc genhtml_function_coverage=1 00:07:47.224 --rc genhtml_legend=1 00:07:47.224 --rc geninfo_all_blocks=1 00:07:47.224 --rc geninfo_unexecuted_blocks=1 00:07:47.224 00:07:47.224 ' 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60777 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60777 
00:07:47.224 03:58:34 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60777 ']' 00:07:47.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.224 03:58:34 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.224 03:58:34 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.224 03:58:34 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.224 03:58:34 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.224 03:58:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.224 03:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:47.224 [2024-12-06 03:58:34.678940] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:07:47.224 [2024-12-06 03:58:34.679063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60777 ] 00:07:47.485 [2024-12-06 03:58:34.837309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.485 [2024-12-06 03:58:34.943515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.055 03:58:35 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.055 03:58:35 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:48.055 03:58:35 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:48.055 03:58:35 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:07:48.055 03:58:35 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:48.627 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:48.627 Waiting for block devices as requested 00:07:48.627 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:48.627 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:48.887 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:48.887 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:54.195 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:54.195 03:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:07:54.195 03:58:41 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:54.195 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:54.196 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:54.196 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:54.196 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:07:54.196 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:54.196 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:07:54.196 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:54.196 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:54.196 03:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:54.196 03:58:41 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:54.196 03:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:54.196 03:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:54.196 03:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:54.196 03:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:54.196 03:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:54.196 03:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:54.196 03:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:54.196 BYT; 00:07:54.196 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:54.196 03:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:54.196 BYT; 00:07:54.196 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:54.196 03:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:54.196 03:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:54.196 03:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:54.196 03:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:54.196 03:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:54.196 03:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:54.196 03:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:54.196 03:58:41 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:54.196 03:58:41 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:54.196 03:58:41 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:54.196 03:58:41 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:54.196 03:58:41 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:54.196 03:58:41 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:54.196 03:58:41 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:54.196 03:58:41 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:54.196 03:58:41 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:54.196 03:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:54.196 03:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:54.196 03:58:41 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:54.196 03:58:41 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:54.196 03:58:41 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:54.196 03:58:41 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:54.196 03:58:41 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:54.196 03:58:41 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:54.196 03:58:41 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:54.196 03:58:41 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:54.196 03:58:41 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:54.196 03:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:54.196 03:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:55.131 The operation has completed successfully. 00:07:55.131 03:58:42 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:56.068 The operation has completed successfully. 00:07:56.068 03:58:43 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:56.635 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:56.893 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:56.893 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:56.893 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:56.893 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:57.151 03:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:57.151 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.151 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.151 [] 00:07:57.151 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.151 03:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:57.151 03:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:57.151 03:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:57.151 03:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:57.151 03:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:57.151 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.151 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.411 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.411 03:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:57.411 03:58:44 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.411 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.411 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.411 03:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:07:57.411 03:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:57.411 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.411 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.411 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.411 03:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:57.411 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.411 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.411 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.411 03:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:57.411 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.411 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.411 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.411 03:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:57.411 03:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:57.411 03:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:57.411 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.411 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.411 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.411 03:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:57.411 03:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:57.412 03:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "076c6380-2ea1-4537-ba00-6102f46a93c5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "076c6380-2ea1-4537-ba00-6102f46a93c5",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "cf8f87dc-5452-4073-961c-37c8172ce0bb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cf8f87dc-5452-4073-961c-37c8172ce0bb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "9b2a308d-9d03-4016-9c39-0fe49634ceea"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9b2a308d-9d03-4016-9c39-0fe49634ceea",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "99c97773-3a80-4170-9f1f-dead10a526b5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "99c97773-3a80-4170-9f1f-dead10a526b5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d4ee9e7d-efc8-4664-b1d8-373e610b5b5c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d4ee9e7d-efc8-4664-b1d8-373e610b5b5c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:57.412 03:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:57.412 03:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:57.412 03:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:57.412 03:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60777 00:07:57.412 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60777 ']' 00:07:57.412 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60777 00:07:57.412 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:57.412 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:57.412 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60777 00:07:57.670 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:57.670 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:57.670 killing process with pid 60777 00:07:57.670 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60777' 00:07:57.670 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60777 00:07:57.670 03:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60777 00:07:59.043 03:58:46 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:59.043 03:58:46 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:59.043 03:58:46 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:59.043 03:58:46 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.043 03:58:46 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:59.043 ************************************ 00:07:59.043 START TEST bdev_hello_world 00:07:59.043 ************************************ 00:07:59.043 03:58:46 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:59.043 [2024-12-06 03:58:46.284233] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:07:59.043 [2024-12-06 03:58:46.284389] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61393 ] 00:07:59.043 [2024-12-06 03:58:46.440184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.043 [2024-12-06 03:58:46.524118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.611 [2024-12-06 03:58:47.025365] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:59.611 [2024-12-06 03:58:47.025407] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:59.611 [2024-12-06 03:58:47.025424] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:59.611 [2024-12-06 03:58:47.027457] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:59.611 [2024-12-06 03:58:47.027890] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:59.611 [2024-12-06 03:58:47.027910] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:59.611 [2024-12-06 03:58:47.028017] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
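
The hello pass above opens the Nvme0n1 bdev from bdev.json, writes a string through an I/O channel, and reads it back. Since -b selects the target, the same binary can be pointed at any other bdev listed in the earlier bdev dump; a sketch:

spdk=/home/vagrant/spdk_repo/spdk
# Same example binary the test drives; only the bdev name differs.
"$spdk/build/examples/hello_bdev" --json "$spdk/test/bdev/bdev.json" -b Nvme2n1
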
00:07:59.611 00:07:59.611 [2024-12-06 03:58:47.028032] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:00.176 00:08:00.176 real 0m1.396s 00:08:00.176 user 0m1.125s 00:08:00.176 sys 0m0.163s 00:08:00.176 03:58:47 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.176 03:58:47 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:00.176 ************************************ 00:08:00.176 END TEST bdev_hello_world 00:08:00.176 ************************************ 00:08:00.176 03:58:47 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:08:00.176 03:58:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:00.176 03:58:47 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.176 03:58:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:00.176 ************************************ 00:08:00.176 START TEST bdev_bounds 00:08:00.176 ************************************ 00:08:00.176 03:58:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:08:00.176 03:58:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61424 00:08:00.176 03:58:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:00.176 Process bdevio pid: 61424 00:08:00.176 03:58:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61424' 00:08:00.176 03:58:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61424 00:08:00.176 03:58:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61424 ']' 00:08:00.176 03:58:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:00.176 03:58:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.176 03:58:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.176 03:58:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.176 03:58:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.176 03:58:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:00.437 [2024-12-06 03:58:47.716080] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:08:00.437 [2024-12-06 03:58:47.716197] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61424 ]
00:08:00.437 [2024-12-06 03:58:47.872135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:00.437 [2024-12-06 03:58:47.958604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:00.437 [2024-12-06 03:58:47.958670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:00.437 [2024-12-06 03:58:47.958684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:01.368 03:58:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:01.368 03:58:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:08:01.368 03:58:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:08:01.368 I/O targets:
00:08:01.368 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:08:01.368 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:08:01.368 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:08:01.368 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:08:01.368 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:08:01.368 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:08:01.368 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:08:01.368
00:08:01.368
00:08:01.368 CUnit - A unit testing framework for C - Version 2.1-3
00:08:01.368 http://cunit.sourceforge.net/
00:08:01.368
00:08:01.368
00:08:01.368 Suite: bdevio tests on: Nvme3n1
00:08:01.368 Test: blockdev write read block ...passed
00:08:01.368 Test: blockdev write zeroes read block ...passed
00:08:01.368 Test: blockdev write zeroes read no split ...passed
00:08:01.368 Test: blockdev write zeroes read split ...passed
00:08:01.369 Test: blockdev write zeroes read split partial ...passed
00:08:01.369 Test: blockdev reset ...[2024-12-06 03:58:48.684197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller
00:08:01.369 [2024-12-06 03:58:48.686765] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
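
The bounds suite runs in two halves, both visible in the trace: bdevio comes up with -w so it builds the bdev list and then waits, and tests.py perform_tests fires the CUnit suites over RPC. A sketch of the same two steps done by hand, with paths from this workspace; the sleep is a crude stand-in for the harness's waitforlisten:

spdk=/home/vagrant/spdk_repo/spdk
# -w: wait for the perform_tests RPC; -s 0: request no extra reserved memory.
"$spdk/test/bdev/bdevio/bdevio" -w -s 0 --json "$spdk/test/bdev/bdev.json" &
sleep 2
"$spdk/test/bdev/bdevio/tests.py" perform_tests
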
00:08:01.369 passed 00:08:01.369 Test: blockdev write read 8 blocks ...passed 00:08:01.369 Test: blockdev write read size > 128k ...passed 00:08:01.369 Test: blockdev write read invalid size ...passed 00:08:01.369 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.369 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.369 Test: blockdev write read max offset ...passed 00:08:01.369 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.369 Test: blockdev writev readv 8 blocks ...passed 00:08:01.369 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.369 Test: blockdev writev readv block ...passed 00:08:01.369 Test: blockdev writev readv size > 128k ...passed 00:08:01.369 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.369 Test: blockdev comparev and writev ...[2024-12-06 03:58:48.692825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5804000 len:0x1000 00:08:01.369 [2024-12-06 03:58:48.692871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:01.369 passed 00:08:01.369 Test: blockdev nvme passthru rw ...passed 00:08:01.369 Test: blockdev nvme passthru vendor specific ...[2024-12-06 03:58:48.693387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:01.369 [2024-12-06 03:58:48.693413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:01.369 passed 00:08:01.369 Test: blockdev nvme admin passthru ...passed 00:08:01.369 Test: blockdev copy ...passed 00:08:01.369 Suite: bdevio tests on: Nvme2n3 00:08:01.369 Test: blockdev write read block ...passed 00:08:01.369 Test: blockdev write zeroes read block ...passed 00:08:01.369 Test: blockdev write zeroes read no split ...passed 00:08:01.369 Test: blockdev write zeroes read split ...passed 00:08:01.369 Test: blockdev write zeroes read split partial ...passed 00:08:01.369 Test: blockdev reset ...[2024-12-06 03:58:48.735704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:01.369 [2024-12-06 03:58:48.740173] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
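
The COMPARE FAILURE (02/85) notices above are not a defect: status code type 02h with status code 85h is the NVMe Compare Failure status, the suite is still marked passed, and the same notice repeats for every comparev-capable namespace below, so the miscompare is provoked deliberately by the comparev-and-writev scenario to prove it is detected. A quick count over a saved copy of this console log (the file name is an assumption):

# Count the expected miscompare completions across all bdevio suites.
grep -c 'COMPARE FAILURE (02/85)' nvme-vg-autotest-console.log
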
00:08:01.369 passed 00:08:01.369 Test: blockdev write read 8 blocks ...passed 00:08:01.369 Test: blockdev write read size > 128k ...passed 00:08:01.369 Test: blockdev write read invalid size ...passed 00:08:01.369 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.369 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.369 Test: blockdev write read max offset ...passed 00:08:01.369 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.369 Test: blockdev writev readv 8 blocks ...passed 00:08:01.369 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.369 Test: blockdev writev readv block ...passed 00:08:01.369 Test: blockdev writev readv size > 128k ...passed 00:08:01.369 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.369 Test: blockdev comparev and writev ...[2024-12-06 03:58:48.746261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5802000 len:0x1000 00:08:01.369 [2024-12-06 03:58:48.746307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:01.369 passed 00:08:01.369 Test: blockdev nvme passthru rw ...passed 00:08:01.369 Test: blockdev nvme passthru vendor specific ...[2024-12-06 03:58:48.746876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:01.369 [2024-12-06 03:58:48.746904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:01.369 passed 00:08:01.369 Test: blockdev nvme admin passthru ...passed 00:08:01.369 Test: blockdev copy ...passed 00:08:01.369 Suite: bdevio tests on: Nvme2n2 00:08:01.369 Test: blockdev write read block ...passed 00:08:01.369 Test: blockdev write zeroes read block ...passed 00:08:01.369 Test: blockdev write zeroes read no split ...passed 00:08:01.369 Test: blockdev write zeroes read split ...passed 00:08:01.369 Test: blockdev write zeroes read split partial ...passed 00:08:01.369 Test: blockdev reset ...[2024-12-06 03:58:48.801856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:01.369 [2024-12-06 03:58:48.804535] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:01.369 passed 00:08:01.369 Test: blockdev write read 8 blocks ...passed 00:08:01.369 Test: blockdev write read size > 128k ...passed 00:08:01.369 Test: blockdev write read invalid size ...passed 00:08:01.369 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.369 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.369 Test: blockdev write read max offset ...passed 00:08:01.369 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.369 Test: blockdev writev readv 8 blocks ...passed 00:08:01.369 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.369 Test: blockdev writev readv block ...passed 00:08:01.369 Test: blockdev writev readv size > 128k ...passed 00:08:01.369 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.369 Test: blockdev comparev and writev ...[2024-12-06 03:58:48.810462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2be638000 len:0x1000 00:08:01.369 [2024-12-06 03:58:48.810503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:01.369 passed 00:08:01.369 Test: blockdev nvme passthru rw ...passed 00:08:01.369 Test: blockdev nvme passthru vendor specific ...passed 00:08:01.369 Test: blockdev nvme admin passthru ...[2024-12-06 03:58:48.811069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:01.369 [2024-12-06 03:58:48.811095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:01.369 passed 00:08:01.369 Test: blockdev copy ...passed 00:08:01.369 Suite: bdevio tests on: Nvme2n1 00:08:01.369 Test: blockdev write read block ...passed 00:08:01.369 Test: blockdev write zeroes read block ...passed 00:08:01.369 Test: blockdev write zeroes read no split ...passed 00:08:01.369 Test: blockdev write zeroes read split ...passed 00:08:01.369 Test: blockdev write zeroes read split partial ...passed 00:08:01.369 Test: blockdev reset ...[2024-12-06 03:58:48.859999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:01.369 [2024-12-06 03:58:48.862679] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:01.369 passed 00:08:01.369 Test: blockdev write read 8 blocks ...passed 00:08:01.369 Test: blockdev write read size > 128k ...passed 00:08:01.369 Test: blockdev write read invalid size ...passed 00:08:01.369 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.369 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.369 Test: blockdev write read max offset ...passed 00:08:01.369 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.369 Test: blockdev writev readv 8 blocks ...passed 00:08:01.369 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.369 Test: blockdev writev readv block ...passed 00:08:01.369 Test: blockdev writev readv size > 128k ...passed 00:08:01.369 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.369 Test: blockdev comparev and writev ...[2024-12-06 03:58:48.868732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2be634000 len:0x1000 00:08:01.369 passed 00:08:01.369 Test: blockdev nvme passthru rw ...[2024-12-06 03:58:48.868770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:01.369 passed 00:08:01.369 Test: blockdev nvme passthru vendor specific ...passed 00:08:01.369 Test: blockdev nvme admin passthru ...[2024-12-06 03:58:48.869392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:01.369 [2024-12-06 03:58:48.869411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:01.369 passed 00:08:01.369 Test: blockdev copy ...passed 00:08:01.369 Suite: bdevio tests on: Nvme1n1p2 00:08:01.369 Test: blockdev write read block ...passed 00:08:01.369 Test: blockdev write zeroes read block ...passed 00:08:01.369 Test: blockdev write zeroes read no split ...passed 00:08:01.369 Test: blockdev write zeroes read split ...passed 00:08:01.625 Test: blockdev write zeroes read split partial ...passed 00:08:01.625 Test: blockdev reset ...[2024-12-06 03:58:48.911312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:01.625 [2024-12-06 03:58:48.913821] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:08:01.625 passed 00:08:01.625 Test: blockdev write read 8 blocks ...passed 00:08:01.625 Test: blockdev write read size > 128k ...passed 00:08:01.625 Test: blockdev write read invalid size ...passed 00:08:01.625 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.625 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.626 Test: blockdev write read max offset ...passed 00:08:01.626 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.626 Test: blockdev writev readv 8 blocks ...passed 00:08:01.626 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.626 Test: blockdev writev readv block ...passed 00:08:01.626 Test: blockdev writev readv size > 128k ...passed 00:08:01.626 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.626 Test: blockdev comparev and writev ...[2024-12-06 03:58:48.919571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2be630000 len:0x1000 00:08:01.626 [2024-12-06 03:58:48.919617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:01.626 passed 00:08:01.626 Test: blockdev nvme passthru rw ...passed 00:08:01.626 Test: blockdev nvme passthru vendor specific ...passed 00:08:01.626 Test: blockdev nvme admin passthru ...passed 00:08:01.626 Test: blockdev copy ...passed 00:08:01.626 Suite: bdevio tests on: Nvme1n1p1 00:08:01.626 Test: blockdev write read block ...passed 00:08:01.626 Test: blockdev write zeroes read block ...passed 00:08:01.626 Test: blockdev write zeroes read no split ...passed 00:08:01.626 Test: blockdev write zeroes read split ...passed 00:08:01.626 Test: blockdev write zeroes read split partial ...passed 00:08:01.626 Test: blockdev reset ...[2024-12-06 03:58:48.961545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:01.626 [2024-12-06 03:58:48.964123] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:08:01.626 passed 00:08:01.626 Test: blockdev write read 8 blocks ...passed 00:08:01.626 Test: blockdev write read size > 128k ...passed 00:08:01.626 Test: blockdev write read invalid size ...passed 00:08:01.626 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.626 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.626 Test: blockdev write read max offset ...passed 00:08:01.626 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.626 Test: blockdev writev readv 8 blocks ...passed 00:08:01.626 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.626 Test: blockdev writev readv block ...passed 00:08:01.626 Test: blockdev writev readv size > 128k ...passed 00:08:01.626 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.626 Test: blockdev comparev and writev ...[2024-12-06 03:58:48.969934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b620e000 len:0x1000 00:08:01.626 [2024-12-06 03:58:48.969975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:01.626 passed 00:08:01.626 Test: blockdev nvme passthru rw ...passed 00:08:01.626 Test: blockdev nvme passthru vendor specific ...passed 00:08:01.626 Test: blockdev nvme admin passthru ...passed 00:08:01.626 Test: blockdev copy ...passed 00:08:01.626 Suite: bdevio tests on: Nvme0n1 00:08:01.626 Test: blockdev write read block ...passed 00:08:01.626 Test: blockdev write zeroes read block ...passed 00:08:01.626 Test: blockdev write zeroes read no split ...passed 00:08:01.626 Test: blockdev write zeroes read split ...passed 00:08:01.626 Test: blockdev write zeroes read split partial ...passed 00:08:01.626 Test: blockdev reset ...[2024-12-06 03:58:49.013841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:01.626 [2024-12-06 03:58:49.016323] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:01.626 passed 00:08:01.626 Test: blockdev write read 8 blocks ...passed 00:08:01.626 Test: blockdev write read size > 128k ...passed 00:08:01.626 Test: blockdev write read invalid size ...passed 00:08:01.626 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.626 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.626 Test: blockdev write read max offset ...passed 00:08:01.626 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.626 Test: blockdev writev readv 8 blocks ...passed 00:08:01.626 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.626 Test: blockdev writev readv block ...passed 00:08:01.626 Test: blockdev writev readv size > 128k ...passed 00:08:01.626 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.626 Test: blockdev comparev and writev ...[2024-12-06 03:58:49.021263] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:01.626 separate metadata which is not supported yet. 
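Note the one ERROR-tagged line in the Nvme0n1 suite above: bdevio deliberately skips comparev_and_writev on a bdev that carries separate metadata, because compare-and-write is not supported there yet, and the suite still records the test as passed. If a wrapper script wanted to predict that skip, one option is to ask the running app for the bdev's metadata size over RPC; a hedged sketch, where the md_size field is an assumption about bdev_get_bdevs output rather than something this log confirms:

    # Hypothetical pre-check: does Nvme0n1 expose separate metadata?
    # Assumes bdev_get_bdevs reports "md_size" per bdev; verify against
    # your SPDK version before relying on it.
    md_size=$(rpc.py bdev_get_bdevs -b Nvme0n1 | jq -r '.[0].md_size // 0')
    if [ "$md_size" -gt 0 ]; then
        echo 'Nvme0n1 has metadata; comparev_and_writev will be skipped'
    fi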
00:08:01.626 passed 00:08:01.626 Test: blockdev nvme passthru rw ...passed 00:08:01.626 Test: blockdev nvme passthru vendor specific ...[2024-12-06 03:58:49.021830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:01.626 [2024-12-06 03:58:49.021872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:01.626 passed 00:08:01.626 Test: blockdev nvme admin passthru ...passed 00:08:01.626 Test: blockdev copy ...passed 00:08:01.626 00:08:01.626 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.626 suites 7 7 n/a 0 0 00:08:01.626 tests 161 161 161 0 0 00:08:01.626 asserts 1025 1025 1025 0 n/a 00:08:01.626 00:08:01.626 Elapsed time = 1.033 seconds 00:08:01.626 0 00:08:01.626 03:58:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61424 00:08:01.626 03:58:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61424 ']' 00:08:01.626 03:58:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61424 00:08:01.626 03:58:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:08:01.626 03:58:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.626 03:58:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61424 00:08:01.626 killing process with pid 61424 00:08:01.626 03:58:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.626 03:58:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.626 03:58:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61424' 00:08:01.626 03:58:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61424 00:08:01.626 03:58:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61424 00:08:02.188 ************************************ 00:08:02.188 END TEST bdev_bounds 00:08:02.188 ************************************ 00:08:02.188 03:58:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:02.188 00:08:02.188 real 0m1.958s 00:08:02.188 user 0m5.046s 00:08:02.188 sys 0m0.267s 00:08:02.188 03:58:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.188 03:58:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:02.188 03:58:49 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:02.188 03:58:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:02.188 03:58:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.188 03:58:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:02.188 ************************************ 00:08:02.188 START TEST bdev_nbd 00:08:02.188 ************************************ 00:08:02.188 03:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:02.188 03:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:02.188 03:58:49 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:02.188 03:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.188 03:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:02.188 03:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:02.188 03:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:02.188 03:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:02.188 03:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:02.188 03:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:02.188 03:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:02.188 03:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:02.188 03:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:02.188 03:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:02.188 03:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:02.188 03:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:02.188 03:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61478 00:08:02.188 03:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:02.189 03:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61478 /var/tmp/spdk-nbd.sock 00:08:02.189 03:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61478 ']' 00:08:02.189 03:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:02.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:02.189 03:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.189 03:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:02.189 03:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.189 03:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:02.189 03:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:02.445 [2024-12-06 03:58:49.717141] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
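The xtrace above spells out the harness skeleton for the nbd phase: a dedicated bdev_svc app is started with the bdev JSON config on its own RPC socket, its pid is captured as nbd_pid, a cleanup trap is installed, and waitforlisten blocks until the socket answers. A minimal sketch of that setup, using only the paths and helpers already visible in the trace (waitforlisten and killprocess are the autotest_common.sh helpers this log is running, not new code):

    rpc_server=/var/tmp/spdk-nbd.sock
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # launch the standalone bdev service that will back the nbd devices
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r "$rpc_server" -i 0 --json "$conf" &
    nbd_pid=$!

    # make sure the app dies with the test, then wait for the socket
    trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
    waitforlisten "$nbd_pid" "$rpc_server"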
00:08:02.445 [2024-12-06 03:58:49.717400] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.445 [2024-12-06 03:58:49.869619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.445 [2024-12-06 03:58:49.951327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.008 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.008 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:08:03.008 03:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:03.008 03:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.008 03:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:03.008 03:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:03.008 03:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:03.008 03:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.008 03:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:03.008 03:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:03.008 03:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:03.008 03:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:03.008 03:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:03.008 03:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:03.008 03:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:03.265 03:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:03.265 03:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:03.265 03:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:03.265 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:03.265 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:03.265 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:03.265 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:03.265 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:03.266 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:03.266 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:03.266 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:03.266 03:58:50 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.266 1+0 records in 00:08:03.266 1+0 records out 00:08:03.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121134 s, 3.4 MB/s 00:08:03.266 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.266 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:03.266 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.266 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:03.266 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:03.266 03:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:03.266 03:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:03.266 03:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:03.522 03:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:03.522 03:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:03.523 03:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:03.523 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:03.523 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:03.523 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:03.523 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:03.523 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:03.523 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:03.523 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:03.523 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:03.523 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.523 1+0 records in 00:08:03.523 1+0 records out 00:08:03.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676931 s, 6.1 MB/s 00:08:03.523 03:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.523 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:03.523 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.523 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:03.523 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:03.523 03:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:03.523 03:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:03.523 03:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:08:03.779 03:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:03.779 03:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:03.779 03:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:03.779 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:08:03.779 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:03.779 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:03.779 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:03.779 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:08:03.779 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:03.779 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:03.779 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:03.779 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.779 1+0 records in 00:08:03.779 1+0 records out 00:08:03.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000655749 s, 6.2 MB/s 00:08:03.780 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.780 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:03.780 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.780 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:03.780 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:03.780 03:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:03.780 03:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:03.780 03:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:04.037 03:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:04.037 03:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:04.037 03:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:04.037 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:08:04.037 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:04.037 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:04.037 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:04.037 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:08:04.307 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:04.307 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:04.307 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:04.307 03:58:51 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.307 1+0 records in 00:08:04.307 1+0 records out 00:08:04.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000885257 s, 4.6 MB/s 00:08:04.307 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.307 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.308 1+0 records in 00:08:04.308 1+0 records out 00:08:04.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050116 s, 8.2 MB/s 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:04.308 03:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
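Every nbd_start_disk above is followed by the same readiness dance, repeated verbatim for nbd0 through nbd4: poll /proc/partitions until the kernel lists the new device (bounded at 20 tries), then prove it is actually readable with one 4 KiB O_DIRECT read whose copy must be non-empty. A condensed sketch of that waitfornbd logic as the xtrace shows it; the retry sleep is an assumption, since every grep in this log succeeds on the first try:

    nbdtest=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest

    waitfornbd() {
        local nbd_name=$1 i size
        # 1) wait for the kernel to publish the device node
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; not observable in this trace
        done
        # 2) one direct 4 KiB read must produce a non-empty copy
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/"$nbd_name" of="$nbdtest" bs=4096 count=1 iflag=direct
            size=$(stat -c %s "$nbdtest")
            rm -f "$nbdtest"
            [ "$size" != "0" ] && return 0
        done
        return 1
    }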
00:08:04.567 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:04.567 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:04.567 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:04.567 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:08:04.567 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:04.567 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:04.567 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:04.567 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:08:04.567 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:04.567 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:04.567 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:04.567 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.567 1+0 records in 00:08:04.567 1+0 records out 00:08:04.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449228 s, 9.1 MB/s 00:08:04.567 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.567 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:04.567 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.567 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:04.567 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:04.567 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:04.567 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:04.567 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:04.825 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:04.825 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:04.825 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:04.825 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:08:04.825 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:04.825 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:04.825 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:04.825 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:08:04.825 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:04.825 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:04.825 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:04.825 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.825 1+0 records in 00:08:04.825 1+0 records out 00:08:04.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538982 s, 7.6 MB/s 00:08:04.825 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.825 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:04.825 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.825 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:04.825 03:58:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:04.825 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:04.825 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:04.825 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:05.082 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:05.082 { 00:08:05.082 "nbd_device": "/dev/nbd0", 00:08:05.082 "bdev_name": "Nvme0n1" 00:08:05.082 }, 00:08:05.082 { 00:08:05.082 "nbd_device": "/dev/nbd1", 00:08:05.082 "bdev_name": "Nvme1n1p1" 00:08:05.082 }, 00:08:05.082 { 00:08:05.082 "nbd_device": "/dev/nbd2", 00:08:05.082 "bdev_name": "Nvme1n1p2" 00:08:05.082 }, 00:08:05.082 { 00:08:05.082 "nbd_device": "/dev/nbd3", 00:08:05.082 "bdev_name": "Nvme2n1" 00:08:05.082 }, 00:08:05.082 { 00:08:05.082 "nbd_device": "/dev/nbd4", 00:08:05.082 "bdev_name": "Nvme2n2" 00:08:05.082 }, 00:08:05.082 { 00:08:05.082 "nbd_device": "/dev/nbd5", 00:08:05.082 "bdev_name": "Nvme2n3" 00:08:05.082 }, 00:08:05.082 { 00:08:05.082 "nbd_device": "/dev/nbd6", 00:08:05.082 "bdev_name": "Nvme3n1" 00:08:05.082 } 00:08:05.082 ]' 00:08:05.082 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:05.082 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:05.082 { 00:08:05.082 "nbd_device": "/dev/nbd0", 00:08:05.082 "bdev_name": "Nvme0n1" 00:08:05.082 }, 00:08:05.082 { 00:08:05.082 "nbd_device": "/dev/nbd1", 00:08:05.082 "bdev_name": "Nvme1n1p1" 00:08:05.082 }, 00:08:05.082 { 00:08:05.082 "nbd_device": "/dev/nbd2", 00:08:05.082 "bdev_name": "Nvme1n1p2" 00:08:05.082 }, 00:08:05.082 { 00:08:05.082 "nbd_device": "/dev/nbd3", 00:08:05.082 "bdev_name": "Nvme2n1" 00:08:05.082 }, 00:08:05.082 { 00:08:05.082 "nbd_device": "/dev/nbd4", 00:08:05.082 "bdev_name": "Nvme2n2" 00:08:05.082 }, 00:08:05.082 { 00:08:05.082 "nbd_device": "/dev/nbd5", 00:08:05.082 "bdev_name": "Nvme2n3" 00:08:05.082 }, 00:08:05.082 { 00:08:05.082 "nbd_device": "/dev/nbd6", 00:08:05.082 "bdev_name": "Nvme3n1" 00:08:05.082 } 00:08:05.082 ]' 00:08:05.082 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:05.082 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:05.082 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.082 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:05.082 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:05.082 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:05.082 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.082 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:05.339 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:05.339 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:05.339 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:05.339 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.339 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.339 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:05.339 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.339 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.339 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.339 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:05.596 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:05.596 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:05.596 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:05.596 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.596 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.596 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:05.596 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.596 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.596 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.596 03:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:05.596 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:05.596 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:05.596 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:05.596 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.596 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.597 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:05.597 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.597 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.597 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.597 03:58:53 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:05.854 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:05.854 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:05.854 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:05.854 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.854 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.854 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:05.854 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.854 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.854 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.854 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:06.112 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:06.112 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:06.112 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:06.112 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.112 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.112 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:06.112 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.112 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.112 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.112 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:06.370 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:06.370 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:06.370 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:06.370 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.370 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.370 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:06.370 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.370 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.370 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.370 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:06.651 03:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:06.651 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:06.651 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
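Tear-down mirrors start-up: for each mapped device the harness issues nbd_stop_disk over the same socket, then waitfornbd_exit polls /proc/partitions until the name disappears; the break in the trace fires as soon as the grep stops matching. A sketch of that loop, assuming the same 20-try bound used on the way up:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # done once the kernel no longer lists the device
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1   # assumed; the trace breaks on the first probe
        done
        return 0
    }

    for dev in "${nbd_list[@]}"; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
            nbd_stop_disk "$dev"
        waitfornbd_exit "$(basename "$dev")"
    done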
00:08:06.651 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.651 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.651 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:06.651 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.651 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.651 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:06.651 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.651 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:06.909 03:58:54 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:06.909 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:07.167 /dev/nbd0 00:08:07.167 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:07.167 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:07.167 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:07.167 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:07.167 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:07.167 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:07.167 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:07.167 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:07.167 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:07.167 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:07.167 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.167 1+0 records in 00:08:07.167 1+0 records out 00:08:07.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641168 s, 6.4 MB/s 00:08:07.167 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.167 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:07.167 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.167 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:07.167 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:07.167 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.167 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:07.167 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:07.427 /dev/nbd1 00:08:07.427 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:07.427 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:07.427 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:07.427 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:07.427 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:07.427 03:58:54 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:07.427 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:07.427 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:07.427 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:07.427 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:07.427 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.427 1+0 records in 00:08:07.427 1+0 records out 00:08:07.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448981 s, 9.1 MB/s 00:08:07.427 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.427 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:07.427 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.427 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:07.427 03:58:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:07.427 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.427 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:07.427 03:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:07.692 /dev/nbd10 00:08:07.692 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:07.692 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:07.692 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:08:07.692 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:07.692 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:07.692 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:07.692 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:08:07.692 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:07.692 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:07.692 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:07.692 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.692 1+0 records in 00:08:07.692 1+0 records out 00:08:07.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000986192 s, 4.2 MB/s 00:08:07.692 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.692 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:07.692 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.692 03:58:55 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:07.692 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:07.692 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.692 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:07.692 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:07.950 /dev/nbd11 00:08:07.950 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:07.950 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:07.950 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:08:07.950 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:07.950 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:07.950 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:07.950 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:08:07.950 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:07.950 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:07.950 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:07.950 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.950 1+0 records in 00:08:07.950 1+0 records out 00:08:07.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458748 s, 8.9 MB/s 00:08:07.950 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.950 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:07.950 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.950 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:07.950 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:07.950 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.950 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:07.950 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:07.950 /dev/nbd12 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
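Between the two phases the harness audits its own bookkeeping: nbd_get_disks is fetched again, jq -r '.[] | .nbd_device' extracts the device paths, and grep -c /dev/nbd counts them; the count must be 0 right after the stop loop (as at 00:08:06.909 above) and must match the requested set once the named /dev/nbd0 through /dev/nbd14 devices of this phase are mapped. A sketch of that count check; the || true guard matches the trace, since grep -c exits non-zero when it counts nothing:

    nbd_get_count() {
        local rpc_server=$1 json
        json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
    }

    count=$(nbd_get_count /var/tmp/spdk-nbd.sock)
    if [ "$count" -ne "${#nbd_list[@]}" ]; then
        echo "unexpected nbd count: $count" >&2
        exit 1
    fi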
00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.209 1+0 records in 00:08:08.209 1+0 records out 00:08:08.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405529 s, 10.1 MB/s 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:08.209 /dev/nbd13 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.209 1+0 records in 00:08:08.209 1+0 records out 00:08:08.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468491 s, 8.7 MB/s 00:08:08.209 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.210 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:08.210 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.210 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:08.210 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:08.210 03:58:55 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.210 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:08.210 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:08.468 /dev/nbd14 00:08:08.468 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:08.468 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:08.468 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:08:08.468 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:08.468 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:08.468 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:08.468 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:08:08.468 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:08.468 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:08.468 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:08.468 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.468 1+0 records in 00:08:08.468 1+0 records out 00:08:08.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330456 s, 12.4 MB/s 00:08:08.468 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.468 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:08.468 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.468 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:08.468 03:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:08.468 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.468 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:08.468 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:08.468 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.468 03:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:08.726 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:08.726 { 00:08:08.726 "nbd_device": "/dev/nbd0", 00:08:08.726 "bdev_name": "Nvme0n1" 00:08:08.726 }, 00:08:08.726 { 00:08:08.726 "nbd_device": "/dev/nbd1", 00:08:08.726 "bdev_name": "Nvme1n1p1" 00:08:08.726 }, 00:08:08.726 { 00:08:08.726 "nbd_device": "/dev/nbd10", 00:08:08.726 "bdev_name": "Nvme1n1p2" 00:08:08.726 }, 00:08:08.726 { 00:08:08.726 "nbd_device": "/dev/nbd11", 00:08:08.726 "bdev_name": "Nvme2n1" 00:08:08.726 }, 00:08:08.726 { 00:08:08.726 "nbd_device": "/dev/nbd12", 00:08:08.726 "bdev_name": "Nvme2n2" 00:08:08.726 }, 00:08:08.726 { 00:08:08.726 "nbd_device": "/dev/nbd13", 00:08:08.726 "bdev_name": 
"Nvme2n3" 00:08:08.726 }, 00:08:08.726 { 00:08:08.726 "nbd_device": "/dev/nbd14", 00:08:08.726 "bdev_name": "Nvme3n1" 00:08:08.726 } 00:08:08.726 ]' 00:08:08.726 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:08.726 { 00:08:08.726 "nbd_device": "/dev/nbd0", 00:08:08.726 "bdev_name": "Nvme0n1" 00:08:08.726 }, 00:08:08.726 { 00:08:08.726 "nbd_device": "/dev/nbd1", 00:08:08.726 "bdev_name": "Nvme1n1p1" 00:08:08.726 }, 00:08:08.726 { 00:08:08.726 "nbd_device": "/dev/nbd10", 00:08:08.726 "bdev_name": "Nvme1n1p2" 00:08:08.726 }, 00:08:08.726 { 00:08:08.726 "nbd_device": "/dev/nbd11", 00:08:08.726 "bdev_name": "Nvme2n1" 00:08:08.726 }, 00:08:08.726 { 00:08:08.726 "nbd_device": "/dev/nbd12", 00:08:08.726 "bdev_name": "Nvme2n2" 00:08:08.726 }, 00:08:08.726 { 00:08:08.726 "nbd_device": "/dev/nbd13", 00:08:08.726 "bdev_name": "Nvme2n3" 00:08:08.726 }, 00:08:08.726 { 00:08:08.726 "nbd_device": "/dev/nbd14", 00:08:08.726 "bdev_name": "Nvme3n1" 00:08:08.726 } 00:08:08.726 ]' 00:08:08.726 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:08.726 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:08.726 /dev/nbd1 00:08:08.726 /dev/nbd10 00:08:08.726 /dev/nbd11 00:08:08.726 /dev/nbd12 00:08:08.726 /dev/nbd13 00:08:08.726 /dev/nbd14' 00:08:08.726 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:08.726 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:08.726 /dev/nbd1 00:08:08.726 /dev/nbd10 00:08:08.726 /dev/nbd11 00:08:08.726 /dev/nbd12 00:08:08.726 /dev/nbd13 00:08:08.726 /dev/nbd14' 00:08:08.726 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:08.726 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:08.726 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:08.726 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:08.726 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:08.726 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:08.726 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:08.726 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:08.726 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:08.726 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:08.726 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:08.726 256+0 records in 00:08:08.726 256+0 records out 00:08:08.726 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00732417 s, 143 MB/s 00:08:08.726 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.726 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:09.051 256+0 records in 00:08:09.051 256+0 records out 00:08:09.051 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0709866 s, 14.8 MB/s 00:08:09.051 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.051 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:09.051 256+0 records in 00:08:09.051 256+0 records out 00:08:09.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0760032 s, 13.8 MB/s 00:08:09.051 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.051 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:09.051 256+0 records in 00:08:09.051 256+0 records out 00:08:09.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0742017 s, 14.1 MB/s 00:08:09.051 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.051 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:09.051 256+0 records in 00:08:09.051 256+0 records out 00:08:09.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0727165 s, 14.4 MB/s 00:08:09.051 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.051 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:09.324 256+0 records in 00:08:09.324 256+0 records out 00:08:09.324 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0730154 s, 14.4 MB/s 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:09.324 256+0 records in 00:08:09.324 256+0 records out 00:08:09.324 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0725747 s, 14.4 MB/s 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:09.324 256+0 records in 00:08:09.324 256+0 records out 00:08:09.324 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0737922 s, 14.2 MB/s 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # 
for i in "${nbd_list[@]}" 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.324 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:09.581 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:09.581 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:09.581 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:09.581 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.581 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.581 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:09.581 03:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:09.581 03:58:57 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:09.581 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.581 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:09.838 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:09.838 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:09.838 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:09.838 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.838 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.838 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:09.838 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:09.838 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.838 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.838 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:09.838 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:09.838 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:09.838 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:09.838 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.838 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.838 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:09.838 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:09.838 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.838 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.838 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:10.095 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:10.095 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:10.095 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:10.095 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.095 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.095 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:10.095 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.095 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.095 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.096 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:10.353 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:10.353 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:10.353 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:10.353 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.353 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.353 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:10.353 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.353 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.353 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.353 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:10.610 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:10.610 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:10.610 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:10.610 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.610 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.610 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:10.610 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.610 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.610 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.610 03:58:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:10.868 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:10.868 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:10.868 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:10.868 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.868 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.868 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:10.868 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.868 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.868 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:10.868 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.868 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:11.127 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:11.127 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:11.127 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:11.127 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:11.127 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:11.127 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:11.127 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:11.127 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:11.127 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:11.127 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:11.127 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:11.127 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:11.127 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:11.127 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.127 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:11.127 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:11.385 malloc_lvol_verify 00:08:11.385 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:11.386 bb2305e0-f892-4674-9ce6-4d10cf76d587 00:08:11.386 03:58:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:11.644 c97b2bb4-6ac3-49bf-8c3d-62ffcdd75fd6 00:08:11.644 03:58:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:11.901 /dev/nbd0 00:08:11.901 03:58:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:11.901 03:58:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:11.901 03:58:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:11.901 03:58:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:11.901 03:58:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:11.901 mke2fs 1.47.0 (5-Feb-2023) 00:08:11.901 Discarding device blocks: 0/4096 done 00:08:11.901 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:11.901 00:08:11.901 Allocating group tables: 0/1 done 00:08:11.901 Writing inode tables: 0/1 done 00:08:11.901 Creating journal (1024 blocks): done 00:08:11.901 Writing superblocks and filesystem accounting information: 0/1 done 00:08:11.901 00:08:11.901 03:58:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:11.901 03:58:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.901 03:58:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:11.901 03:58:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:11.901 03:58:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:11.901 03:58:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:11.901 03:58:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:12.160 03:58:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:12.160 03:58:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:12.160 03:58:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:12.160 03:58:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.160 03:58:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.160 03:58:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:12.160 03:58:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.160 03:58:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.160 03:58:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61478 00:08:12.160 03:58:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61478 ']' 00:08:12.160 03:58:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61478 00:08:12.160 03:58:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:08:12.160 03:58:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.160 03:58:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61478 00:08:12.160 03:58:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.160 03:58:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:12.160 killing process with pid 61478 00:08:12.160 03:58:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61478' 00:08:12.160 03:58:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61478 00:08:12.160 03:58:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61478 00:08:12.725 03:59:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:12.726 00:08:12.726 real 0m10.495s 00:08:12.726 user 0m15.126s 00:08:12.726 sys 0m3.489s 00:08:12.726 03:59:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.726 ************************************ 00:08:12.726 END TEST bdev_nbd 00:08:12.726 ************************************ 00:08:12.726 03:59:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:12.726 03:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:08:12.726 03:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:08:12.726 skipping fio tests on NVMe due to multi-ns failures. 00:08:12.726 03:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:08:12.726 03:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:12.726 03:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:12.726 03:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:12.726 03:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:12.726 03:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.726 03:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:12.726 ************************************ 00:08:12.726 START TEST bdev_verify 00:08:12.726 ************************************ 00:08:12.726 03:59:00 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:12.984 [2024-12-06 03:59:00.274751] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:08:12.984 [2024-12-06 03:59:00.274877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61888 ] 00:08:12.984 [2024-12-06 03:59:00.433198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:13.242 [2024-12-06 03:59:00.531998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.242 [2024-12-06 03:59:00.532088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.808 Running I/O for 5 seconds... 
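The verify stage now running drives every bdev from bdev.json through bdevperf: -q 128 keeps 128 I/Os in flight per job, -o 4096 issues 4 KiB I/Os, -w verify makes each write get read back and checked for integrity, -t 5 bounds the run to five seconds, and -m 0x3 gives the app cores 0 and 1, which is why two reactor-started lines appear above. The invocation condenses to the sketch below, with the logged absolute paths rewritten relative to the repo root and the remaining flags left exactly as traced:

    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''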
00:08:16.124 20608.00 IOPS, 80.50 MiB/s [2024-12-06T03:59:04.587Z] 21376.00 IOPS, 83.50 MiB/s [2024-12-06T03:59:05.521Z] 22208.00 IOPS, 86.75 MiB/s [2024-12-06T03:59:06.454Z] 22800.00 IOPS, 89.06 MiB/s [2024-12-06T03:59:06.454Z] 22796.80 IOPS, 89.05 MiB/s 00:08:18.927 Latency(us) 00:08:18.927 [2024-12-06T03:59:06.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:18.927 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:18.927 Verification LBA range: start 0x0 length 0xbd0bd 00:08:18.927 Nvme0n1 : 5.08 1614.15 6.31 0.00 0.00 79020.46 14317.10 87919.06 00:08:18.927 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:18.927 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:18.927 Nvme0n1 : 5.06 1594.65 6.23 0.00 0.00 80014.98 16031.11 84692.68 00:08:18.927 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:18.927 Verification LBA range: start 0x0 length 0x4ff80 00:08:18.927 Nvme1n1p1 : 5.08 1613.61 6.30 0.00 0.00 78876.09 15325.34 78643.20 00:08:18.927 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:18.927 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:18.927 Nvme1n1p1 : 5.06 1594.20 6.23 0.00 0.00 79860.66 18753.38 74206.92 00:08:18.927 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:18.927 Verification LBA range: start 0x0 length 0x4ff7f 00:08:18.927 Nvme1n1p2 : 5.08 1612.57 6.30 0.00 0.00 78592.02 16837.71 68964.04 00:08:18.927 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:18.927 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:18.927 Nvme1n1p2 : 5.06 1593.69 6.23 0.00 0.00 79744.51 17140.18 67754.14 00:08:18.927 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:18.927 Verification LBA range: start 0x0 length 0x80000 00:08:18.927 Nvme2n1 : 5.08 1611.54 6.30 0.00 0.00 78419.39 16636.06 67754.14 00:08:18.927 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:18.927 Verification LBA range: start 0x80000 length 0x80000 00:08:18.927 Nvme2n1 : 5.06 1593.24 6.22 0.00 0.00 79605.12 16434.41 67754.14 00:08:18.927 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:18.927 Verification LBA range: start 0x0 length 0x80000 00:08:18.927 Nvme2n2 : 5.10 1619.77 6.33 0.00 0.00 77950.21 3604.48 66544.25 00:08:18.927 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:18.927 Verification LBA range: start 0x80000 length 0x80000 00:08:18.927 Nvme2n2 : 5.08 1600.03 6.25 0.00 0.00 79088.83 3037.34 68157.44 00:08:18.927 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:18.927 Verification LBA range: start 0x0 length 0x80000 00:08:18.927 Nvme2n3 : 5.11 1628.36 6.36 0.00 0.00 77457.07 10082.46 69770.63 00:08:18.927 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:18.927 Verification LBA range: start 0x80000 length 0x80000 00:08:18.927 Nvme2n3 : 5.09 1608.60 6.28 0.00 0.00 78624.00 8418.86 70173.93 00:08:18.927 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:18.927 Verification LBA range: start 0x0 length 0x20000 00:08:18.927 Nvme3n1 : 5.11 1627.94 6.36 0.00 0.00 77386.35 8822.15 72593.72 00:08:18.927 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:18.927 Verification LBA range: start 0x20000 length 0x20000 00:08:18.927 Nvme3n1 : 
5.09 1608.15 6.28 0.00 0.00 78478.02 8771.74 72190.42 00:08:18.927 [2024-12-06T03:59:06.454Z] =================================================================================================================== 00:08:18.927 [2024-12-06T03:59:06.454Z] Total : 22520.49 87.97 0.00 0.00 78785.89 3037.34 87919.06 00:08:20.299 00:08:20.299 real 0m7.287s 00:08:20.299 user 0m13.673s 00:08:20.299 sys 0m0.229s 00:08:20.299 03:59:07 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.299 03:59:07 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:20.299 ************************************ 00:08:20.299 END TEST bdev_verify 00:08:20.299 ************************************ 00:08:20.299 03:59:07 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:20.299 03:59:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:20.299 03:59:07 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.299 03:59:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:20.299 ************************************ 00:08:20.299 START TEST bdev_verify_big_io 00:08:20.299 ************************************ 00:08:20.299 03:59:07 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:20.299 [2024-12-06 03:59:07.623542] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:08:20.299 [2024-12-06 03:59:07.623667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61986 ] 00:08:20.299 [2024-12-06 03:59:07.784773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:20.557 [2024-12-06 03:59:07.887330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.557 [2024-12-06 03:59:07.887424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.124 Running I/O for 5 seconds... 
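The big-I/O pass now in flight is the same bdevperf verify harness with the I/O size raised from 4096 to 65536 bytes, so the interesting column in the table that follows is MiB/s rather than IOPS; larger requests also tend to exercise the bdev layer's request-splitting paths across the GPT partitions. Only the -o value differs from the previous run:

    # identical to the 4 KiB verify run above, except for the I/O size
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''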
00:08:25.303 1765.00 IOPS, 110.31 MiB/s [2024-12-06T03:59:14.723Z] 2101.00 IOPS, 131.31 MiB/s [2024-12-06T03:59:14.985Z] 2414.33 IOPS, 150.90 MiB/s 00:08:27.458 Latency(us) 00:08:27.458 [2024-12-06T03:59:14.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.458 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:27.458 Verification LBA range: start 0x0 length 0xbd0b 00:08:27.458 Nvme0n1 : 6.00 101.55 6.35 0.00 0.00 1190007.27 11393.18 1342177.28 00:08:27.458 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:27.458 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:27.458 Nvme0n1 : 5.84 105.15 6.57 0.00 0.00 1161682.25 16938.54 1077613.49 00:08:27.458 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:27.458 Verification LBA range: start 0x0 length 0x4ff8 00:08:27.458 Nvme1n1p1 : 6.00 103.19 6.45 0.00 0.00 1141297.51 104857.60 1135688.47 00:08:27.458 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:27.458 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:27.458 Nvme1n1p1 : 5.84 104.96 6.56 0.00 0.00 1126991.89 83482.78 1064707.94 00:08:27.458 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:27.458 Verification LBA range: start 0x0 length 0x4ff7 00:08:27.458 Nvme1n1p2 : 6.00 106.65 6.67 0.00 0.00 1080952.36 93968.54 974369.08 00:08:27.458 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:27.458 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:27.458 Nvme1n1p2 : 5.84 109.54 6.85 0.00 0.00 1072142.49 156479.80 1090519.04 00:08:27.458 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:27.458 Verification LBA range: start 0x0 length 0x8000 00:08:27.458 Nvme2n1 : 6.03 103.74 6.48 0.00 0.00 1079399.50 16232.76 1974549.27 00:08:27.458 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:27.458 Verification LBA range: start 0x8000 length 0x8000 00:08:27.458 Nvme2n1 : 5.85 109.47 6.84 0.00 0.00 1041886.92 154060.01 1109877.37 00:08:27.458 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:27.458 Verification LBA range: start 0x0 length 0x8000 00:08:27.458 Nvme2n2 : 6.05 108.57 6.79 0.00 0.00 994741.49 15123.69 1793871.56 00:08:27.458 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:27.458 Verification LBA range: start 0x8000 length 0x8000 00:08:27.458 Nvme2n2 : 5.98 117.78 7.36 0.00 0.00 951666.07 35086.97 1116330.14 00:08:27.458 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:27.458 Verification LBA range: start 0x0 length 0x8000 00:08:27.458 Nvme2n3 : 6.07 116.34 7.27 0.00 0.00 900471.85 14821.22 1832588.21 00:08:27.458 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:27.458 Verification LBA range: start 0x8000 length 0x8000 00:08:27.458 Nvme2n3 : 5.99 124.19 7.76 0.00 0.00 881798.94 5343.70 1122782.92 00:08:27.458 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:27.458 Verification LBA range: start 0x0 length 0x2000 00:08:27.458 Nvme3n1 : 6.17 170.12 10.63 0.00 0.00 600468.19 201.65 1858399.31 00:08:27.458 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:27.458 Verification LBA range: start 0x2000 length 0x2000 00:08:27.458 Nvme3n1 : 5.99 128.15 8.01 0.00 0.00 830883.64 3377.62 1135688.47 00:08:27.458 
[2024-12-06T03:59:14.986Z] =================================================================================================================== 00:08:27.459 [2024-12-06T03:59:14.986Z] Total : 1609.41 100.59 0.00 0.00 980279.19 201.65 1974549.27 00:08:28.826 00:08:28.827 real 0m8.791s 00:08:28.827 user 0m16.648s 00:08:28.827 sys 0m0.237s 00:08:28.827 03:59:16 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.827 03:59:16 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:28.827 ************************************ 00:08:28.827 END TEST bdev_verify_big_io 00:08:28.827 ************************************ 00:08:29.084 03:59:16 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:29.084 03:59:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:29.084 03:59:16 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.084 03:59:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:29.084 ************************************ 00:08:29.084 START TEST bdev_write_zeroes 00:08:29.084 ************************************ 00:08:29.084 03:59:16 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:29.084 [2024-12-06 03:59:16.456426] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:08:29.084 [2024-12-06 03:59:16.456542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62095 ] 00:08:29.343 [2024-12-06 03:59:16.616430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.343 [2024-12-06 03:59:16.716340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.927 Running I/O for 1 seconds... 
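The write_zeroes pass now running swaps data verification for the bdev layer's zero-fill path: -w write_zeroes makes bdevperf submit write-zeroes requests instead of writes it later reads back, -t 1 keeps the run to one second since there is nothing to verify afterwards, and the core-mask flags are dropped so a single reactor suffices (core 0 above). As traced:

    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1 ''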
00:08:31.118 3400.00 IOPS, 13.28 MiB/s 00:08:31.118 Latency(us) 00:08:31.118 [2024-12-06T03:59:18.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.118 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:31.118 Nvme0n1 : 1.28 256.94 1.00 0.00 0.00 468976.02 14216.27 948557.98 00:08:31.118 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:31.118 Nvme1n1p1 : 1.28 601.02 2.35 0.00 0.00 212376.22 10687.41 412977.62 00:08:31.118 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:31.118 Nvme1n1p2 : 1.28 500.35 1.95 0.00 0.00 254438.56 11443.59 412977.62 00:08:31.118 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:31.118 Nvme2n1 : 1.28 499.89 1.95 0.00 0.00 254024.15 11494.01 411364.43 00:08:31.118 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:31.118 Nvme2n2 : 1.28 499.44 1.95 0.00 0.00 253635.90 11494.01 411364.43 00:08:31.118 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:31.118 Nvme2n3 : 1.28 498.99 1.95 0.00 0.00 253327.52 11494.01 411364.43 00:08:31.118 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:31.118 Nvme3n1 : 1.28 498.53 1.95 0.00 0.00 252981.41 11494.01 411364.43 00:08:31.118 [2024-12-06T03:59:18.645Z] =================================================================================================================== 00:08:31.118 [2024-12-06T03:59:18.645Z] Total : 3355.17 13.11 0.00 0.00 262735.08 10687.41 948557.98 00:08:32.051 00:08:32.051 real 0m2.940s 00:08:32.051 user 0m2.646s 00:08:32.051 sys 0m0.181s 00:08:32.051 03:59:19 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.051 03:59:19 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:32.051 ************************************ 00:08:32.051 END TEST bdev_write_zeroes 00:08:32.051 ************************************ 00:08:32.051 03:59:19 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:32.051 03:59:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:32.051 03:59:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.051 03:59:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:32.051 ************************************ 00:08:32.051 START TEST bdev_json_nonenclosed 00:08:32.051 ************************************ 00:08:32.051 03:59:19 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:32.051 [2024-12-06 03:59:19.437613] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:08:32.051 [2024-12-06 03:59:19.437745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62148 ] 00:08:32.308 [2024-12-06 03:59:19.595411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.308 [2024-12-06 03:59:19.694081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.308 [2024-12-06 03:59:19.694158] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:32.308 [2024-12-06 03:59:19.694174] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:32.308 [2024-12-06 03:59:19.694184] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:32.565 00:08:32.565 real 0m0.500s 00:08:32.565 user 0m0.300s 00:08:32.565 sys 0m0.096s 00:08:32.565 03:59:19 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.565 03:59:19 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:32.565 ************************************ 00:08:32.565 END TEST bdev_json_nonenclosed 00:08:32.565 ************************************ 00:08:32.565 03:59:19 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:32.565 03:59:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:32.565 03:59:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.565 03:59:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:32.565 ************************************ 00:08:32.565 START TEST bdev_json_nonarray 00:08:32.565 ************************************ 00:08:32.565 03:59:19 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:32.565 [2024-12-06 03:59:19.977620] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:08:32.565 [2024-12-06 03:59:19.977785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62179 ] 00:08:32.822 [2024-12-06 03:59:20.140041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.822 [2024-12-06 03:59:20.240117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.822 [2024-12-06 03:59:20.240204] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
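Both JSON guards in this stretch probe the same config loader and both are expected to fail cleanly: bdev_json_nonenclosed (just finished) feeds it a document that is not enclosed in {}, and bdev_json_nonarray (whose error was logged immediately above) feeds it one whose 'subsystems' key is not an array; the pass criterion is the *ERROR* line plus an orderly non-zero app stop, not a crash. For contrast, a minimal well-formed shape, written as a shell heredoc; the empty config list is an assumption meant only to satisfy the parser, not to configure anything useful:

    # top level enclosed in {}, "subsystems" present and an array
    cat > /tmp/minimal.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }
    EOF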
00:08:32.822 [2024-12-06 03:59:20.240222] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:32.822 [2024-12-06 03:59:20.240231] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:33.080 00:08:33.080 real 0m0.505s 00:08:33.080 user 0m0.303s 00:08:33.080 sys 0m0.099s 00:08:33.080 03:59:20 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.080 03:59:20 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:33.080 ************************************ 00:08:33.080 END TEST bdev_json_nonarray 00:08:33.080 ************************************ 00:08:33.080 03:59:20 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:08:33.080 03:59:20 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:08:33.080 03:59:20 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:33.080 03:59:20 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.080 03:59:20 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.080 03:59:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:33.080 ************************************ 00:08:33.080 START TEST bdev_gpt_uuid 00:08:33.080 ************************************ 00:08:33.080 03:59:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:08:33.080 03:59:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:08:33.080 03:59:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:08:33.080 03:59:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62199 00:08:33.080 03:59:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:33.080 03:59:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62199 00:08:33.080 03:59:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62199 ']' 00:08:33.080 03:59:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.080 03:59:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.080 03:59:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.080 03:59:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.080 03:59:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:33.080 03:59:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:33.080 [2024-12-06 03:59:20.541647] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
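The bdev_gpt_uuid test starting here loads the same bdev.json into a standalone spdk_tgt, waits for bdev examination to finish, and then asserts that each GPT partition can be looked up by its unique partition GUID and that the GUID round-trips through bdev_get_bdevs. The lookups traced below condense to the following sketch; the rpc.py path and the SPDK_TEST_first GUID are the values this run uses, and rpc.py is assumed to default to /var/tmp/spdk.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    guid=6f89f330-603b-4116-ac73-2ca8eae53030

    "$rpc" load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    "$rpc" bdev_wait_for_examine
    bdev=$("$rpc" bdev_get_bdevs -b "$guid")
    [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$guid" ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$guid" ]]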
00:08:33.080 [2024-12-06 03:59:20.541781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62199 ] 00:08:33.344 [2024-12-06 03:59:20.700756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.344 [2024-12-06 03:59:20.800822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.910 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.910 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:08:33.910 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:33.910 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.910 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:34.497 Some configs were skipped because the RPC state that can call them passed over. 00:08:34.497 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.497 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:08:34.497 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.497 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:34.497 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.497 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:34.497 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.497 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:34.497 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.497 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:08:34.497 { 00:08:34.497 "name": "Nvme1n1p1", 00:08:34.497 "aliases": [ 00:08:34.497 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:34.497 ], 00:08:34.497 "product_name": "GPT Disk", 00:08:34.497 "block_size": 4096, 00:08:34.497 "num_blocks": 655104, 00:08:34.497 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:34.497 "assigned_rate_limits": { 00:08:34.497 "rw_ios_per_sec": 0, 00:08:34.497 "rw_mbytes_per_sec": 0, 00:08:34.497 "r_mbytes_per_sec": 0, 00:08:34.497 "w_mbytes_per_sec": 0 00:08:34.497 }, 00:08:34.497 "claimed": false, 00:08:34.497 "zoned": false, 00:08:34.497 "supported_io_types": { 00:08:34.497 "read": true, 00:08:34.497 "write": true, 00:08:34.497 "unmap": true, 00:08:34.497 "flush": true, 00:08:34.497 "reset": true, 00:08:34.497 "nvme_admin": false, 00:08:34.497 "nvme_io": false, 00:08:34.497 "nvme_io_md": false, 00:08:34.497 "write_zeroes": true, 00:08:34.497 "zcopy": false, 00:08:34.497 "get_zone_info": false, 00:08:34.497 "zone_management": false, 00:08:34.497 "zone_append": false, 00:08:34.497 "compare": true, 00:08:34.497 "compare_and_write": false, 00:08:34.497 "abort": true, 00:08:34.497 "seek_hole": false, 00:08:34.497 "seek_data": false, 00:08:34.497 "copy": true, 00:08:34.497 "nvme_iov_md": false 00:08:34.497 }, 00:08:34.497 "driver_specific": { 
00:08:34.497 "gpt": { 00:08:34.497 "base_bdev": "Nvme1n1", 00:08:34.497 "offset_blocks": 256, 00:08:34.497 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:34.497 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:34.497 "partition_name": "SPDK_TEST_first" 00:08:34.497 } 00:08:34.497 } 00:08:34.497 } 00:08:34.497 ]' 00:08:34.497 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:08:34.497 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:08:34.497 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:08:34.497 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:34.497 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:34.497 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:34.497 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:34.497 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.497 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:34.497 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.497 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:08:34.497 { 00:08:34.497 "name": "Nvme1n1p2", 00:08:34.497 "aliases": [ 00:08:34.497 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:34.497 ], 00:08:34.497 "product_name": "GPT Disk", 00:08:34.497 "block_size": 4096, 00:08:34.497 "num_blocks": 655103, 00:08:34.497 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:34.497 "assigned_rate_limits": { 00:08:34.497 "rw_ios_per_sec": 0, 00:08:34.497 "rw_mbytes_per_sec": 0, 00:08:34.497 "r_mbytes_per_sec": 0, 00:08:34.497 "w_mbytes_per_sec": 0 00:08:34.497 }, 00:08:34.497 "claimed": false, 00:08:34.497 "zoned": false, 00:08:34.497 "supported_io_types": { 00:08:34.497 "read": true, 00:08:34.497 "write": true, 00:08:34.497 "unmap": true, 00:08:34.497 "flush": true, 00:08:34.497 "reset": true, 00:08:34.497 "nvme_admin": false, 00:08:34.497 "nvme_io": false, 00:08:34.497 "nvme_io_md": false, 00:08:34.497 "write_zeroes": true, 00:08:34.497 "zcopy": false, 00:08:34.497 "get_zone_info": false, 00:08:34.497 "zone_management": false, 00:08:34.497 "zone_append": false, 00:08:34.497 "compare": true, 00:08:34.497 "compare_and_write": false, 00:08:34.497 "abort": true, 00:08:34.497 "seek_hole": false, 00:08:34.497 "seek_data": false, 00:08:34.497 "copy": true, 00:08:34.497 "nvme_iov_md": false 00:08:34.497 }, 00:08:34.497 "driver_specific": { 00:08:34.497 "gpt": { 00:08:34.497 "base_bdev": "Nvme1n1", 00:08:34.497 "offset_blocks": 655360, 00:08:34.498 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:34.498 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:34.498 "partition_name": "SPDK_TEST_second" 00:08:34.498 } 00:08:34.498 } 00:08:34.498 } 00:08:34.498 ]' 00:08:34.498 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:08:34.498 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:08:34.498 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:08:34.498 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:34.498 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:34.498 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:34.498 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62199 00:08:34.498 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62199 ']' 00:08:34.498 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62199 00:08:34.498 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:08:34.498 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.498 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62199 00:08:34.498 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.498 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.498 killing process with pid 62199 00:08:34.498 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62199' 00:08:34.498 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62199 00:08:34.498 03:59:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62199 00:08:36.394 00:08:36.394 real 0m2.957s 00:08:36.394 user 0m3.094s 00:08:36.394 sys 0m0.360s 00:08:36.394 03:59:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.394 03:59:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:36.394 ************************************ 00:08:36.394 END TEST bdev_gpt_uuid 00:08:36.394 ************************************ 00:08:36.394 03:59:23 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:08:36.394 03:59:23 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:08:36.394 03:59:23 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:08:36.394 03:59:23 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:36.394 03:59:23 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:36.394 03:59:23 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:36.394 03:59:23 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:36.394 03:59:23 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:36.394 03:59:23 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:36.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:36.394 Waiting for block devices as requested 00:08:36.394 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:36.652 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:36.652 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:36.652 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:41.914 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:41.914 03:59:29 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:41.914 03:59:29 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:41.914 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:41.914 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:41.914 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:41.914 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:41.914 03:59:29 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:41.914 00:08:41.914 real 0m54.945s 00:08:41.914 user 1m10.588s 00:08:41.914 sys 0m7.601s 00:08:41.914 03:59:29 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.914 03:59:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:41.914 ************************************ 00:08:41.915 END TEST blockdev_nvme_gpt 00:08:41.915 ************************************ 00:08:41.915 03:59:29 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:41.915 03:59:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.915 03:59:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.915 03:59:29 -- common/autotest_common.sh@10 -- # set +x 00:08:41.915 ************************************ 00:08:41.915 START TEST nvme 00:08:41.915 ************************************ 00:08:41.915 03:59:29 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:42.173 * Looking for test storage... 00:08:42.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:42.173 03:59:29 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:42.173 03:59:29 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:08:42.173 03:59:29 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:42.173 03:59:29 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:42.173 03:59:29 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.173 03:59:29 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.173 03:59:29 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.173 03:59:29 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.173 03:59:29 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.173 03:59:29 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.173 03:59:29 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.173 03:59:29 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.173 03:59:29 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.173 03:59:29 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.173 03:59:29 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.173 03:59:29 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:42.173 03:59:29 nvme -- scripts/common.sh@345 -- # : 1 00:08:42.173 03:59:29 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.173 03:59:29 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.173 03:59:29 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:42.173 03:59:29 nvme -- scripts/common.sh@353 -- # local d=1 00:08:42.173 03:59:29 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.173 03:59:29 nvme -- scripts/common.sh@355 -- # echo 1 00:08:42.173 03:59:29 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.173 03:59:29 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:42.173 03:59:29 nvme -- scripts/common.sh@353 -- # local d=2 00:08:42.173 03:59:29 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.173 03:59:29 nvme -- scripts/common.sh@355 -- # echo 2 00:08:42.173 03:59:29 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.173 03:59:29 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.173 03:59:29 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.173 03:59:29 nvme -- scripts/common.sh@368 -- # return 0 00:08:42.173 03:59:29 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.173 03:59:29 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:42.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.173 --rc genhtml_branch_coverage=1 00:08:42.173 --rc genhtml_function_coverage=1 00:08:42.173 --rc genhtml_legend=1 00:08:42.173 --rc geninfo_all_blocks=1 00:08:42.173 --rc geninfo_unexecuted_blocks=1 00:08:42.173 00:08:42.173 ' 00:08:42.173 03:59:29 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:42.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.173 --rc genhtml_branch_coverage=1 00:08:42.173 --rc genhtml_function_coverage=1 00:08:42.173 --rc genhtml_legend=1 00:08:42.173 --rc geninfo_all_blocks=1 00:08:42.173 --rc geninfo_unexecuted_blocks=1 00:08:42.173 00:08:42.173 ' 00:08:42.173 03:59:29 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:42.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.174 --rc genhtml_branch_coverage=1 00:08:42.174 --rc genhtml_function_coverage=1 00:08:42.174 --rc genhtml_legend=1 00:08:42.174 --rc geninfo_all_blocks=1 00:08:42.174 --rc geninfo_unexecuted_blocks=1 00:08:42.174 00:08:42.174 ' 00:08:42.174 03:59:29 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:42.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.174 --rc genhtml_branch_coverage=1 00:08:42.174 --rc genhtml_function_coverage=1 00:08:42.174 --rc genhtml_legend=1 00:08:42.174 --rc geninfo_all_blocks=1 00:08:42.174 --rc geninfo_unexecuted_blocks=1 00:08:42.174 00:08:42.174 ' 00:08:42.174 03:59:29 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:42.432 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:42.999 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:42.999 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:42.999 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:42.999 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:42.999 03:59:30 nvme -- nvme/nvme.sh@79 -- # uname 00:08:42.999 03:59:30 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:42.999 03:59:30 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:42.999 03:59:30 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:42.999 03:59:30 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:42.999 03:59:30 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:08:43.260 03:59:30 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:08:43.260 03:59:30 nvme -- common/autotest_common.sh@1075 -- # stubpid=62835 00:08:43.260 Waiting for stub to ready for secondary processes... 00:08:43.260 03:59:30 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:08:43.260 03:59:30 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:43.260 03:59:30 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:43.260 03:59:30 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62835 ]] 00:08:43.260 03:59:30 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:43.260 [2024-12-06 03:59:30.558227] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:08:43.260 [2024-12-06 03:59:30.558346] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:43.833 [2024-12-06 03:59:31.342158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:44.093 [2024-12-06 03:59:31.439613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.093 [2024-12-06 03:59:31.439891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.093 [2024-12-06 03:59:31.439908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.093 [2024-12-06 03:59:31.453184] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:44.094 [2024-12-06 03:59:31.453219] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:44.094 [2024-12-06 03:59:31.465066] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:44.094 [2024-12-06 03:59:31.465153] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:44.094 [2024-12-06 03:59:31.468511] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:44.094 [2024-12-06 03:59:31.468843] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:44.094 [2024-12-06 03:59:31.468964] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:44.094 [2024-12-06 03:59:31.472768] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:44.094 [2024-12-06 03:59:31.473037] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:44.094 [2024-12-06 03:59:31.473146] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:44.094 [2024-12-06 03:59:31.477214] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:44.094 [2024-12-06 03:59:31.477336] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:44.094 [2024-12-06 03:59:31.477380] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:44.094 [2024-12-06 03:59:31.477408] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:44.094 [2024-12-06 03:59:31.477434] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:44.094 done. 00:08:44.094 03:59:31 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:44.094 03:59:31 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:44.094 03:59:31 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:44.094 03:59:31 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:44.094 03:59:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.094 03:59:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:44.094 ************************************ 00:08:44.094 START TEST nvme_reset 00:08:44.094 ************************************ 00:08:44.094 03:59:31 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:44.355 Initializing NVMe Controllers 00:08:44.355 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:44.355 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:44.355 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:44.355 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:44.355 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:44.355 00:08:44.355 real 0m0.210s 00:08:44.355 user 0m0.067s 00:08:44.355 sys 0m0.101s 00:08:44.355 03:59:31 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.355 03:59:31 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:44.355 ************************************ 00:08:44.355 END TEST nvme_reset 00:08:44.355 ************************************ 00:08:44.355 03:59:31 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:44.355 03:59:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:44.355 03:59:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.355 03:59:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:44.355 ************************************ 00:08:44.355 START TEST nvme_identify 00:08:44.355 ************************************ 00:08:44.355 03:59:31 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:44.355 03:59:31 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:44.355 03:59:31 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:44.355 03:59:31 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:44.355 03:59:31 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:44.355 03:59:31 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:44.355 03:59:31 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:44.355 03:59:31 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:44.355 03:59:31 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:44.355 03:59:31 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:44.355 03:59:31 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:44.355 03:59:31 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:44.355 03:59:31 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:44.619 [2024-12-06 
03:59:32.020524] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62856 terminated unexpected 00:08:44.619 ===================================================== 00:08:44.619 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:44.619 ===================================================== 00:08:44.619 Controller Capabilities/Features 00:08:44.619 ================================ 00:08:44.619 Vendor ID: 1b36 00:08:44.619 Subsystem Vendor ID: 1af4 00:08:44.619 Serial Number: 12340 00:08:44.619 Model Number: QEMU NVMe Ctrl 00:08:44.619 Firmware Version: 8.0.0 00:08:44.619 Recommended Arb Burst: 6 00:08:44.619 IEEE OUI Identifier: 00 54 52 00:08:44.619 Multi-path I/O 00:08:44.619 May have multiple subsystem ports: No 00:08:44.619 May have multiple controllers: No 00:08:44.619 Associated with SR-IOV VF: No 00:08:44.619 Max Data Transfer Size: 524288 00:08:44.619 Max Number of Namespaces: 256 00:08:44.619 Max Number of I/O Queues: 64 00:08:44.619 NVMe Specification Version (VS): 1.4 00:08:44.619 NVMe Specification Version (Identify): 1.4 00:08:44.619 Maximum Queue Entries: 2048 00:08:44.619 Contiguous Queues Required: Yes 00:08:44.619 Arbitration Mechanisms Supported 00:08:44.619 Weighted Round Robin: Not Supported 00:08:44.619 Vendor Specific: Not Supported 00:08:44.619 Reset Timeout: 7500 ms 00:08:44.619 Doorbell Stride: 4 bytes 00:08:44.619 NVM Subsystem Reset: Not Supported 00:08:44.619 Command Sets Supported 00:08:44.619 NVM Command Set: Supported 00:08:44.619 Boot Partition: Not Supported 00:08:44.619 Memory Page Size Minimum: 4096 bytes 00:08:44.619 Memory Page Size Maximum: 65536 bytes 00:08:44.619 Persistent Memory Region: Not Supported 00:08:44.619 Optional Asynchronous Events Supported 00:08:44.619 Namespace Attribute Notices: Supported 00:08:44.619 Firmware Activation Notices: Not Supported 00:08:44.619 ANA Change Notices: Not Supported 00:08:44.619 PLE Aggregate Log Change Notices: Not Supported 00:08:44.619 LBA Status Info Alert Notices: Not Supported 00:08:44.619 EGE Aggregate Log Change Notices: Not Supported 00:08:44.619 Normal NVM Subsystem Shutdown event: Not Supported 00:08:44.619 Zone Descriptor Change Notices: Not Supported 00:08:44.619 Discovery Log Change Notices: Not Supported 00:08:44.619 Controller Attributes 00:08:44.619 128-bit Host Identifier: Not Supported 00:08:44.619 Non-Operational Permissive Mode: Not Supported 00:08:44.619 NVM Sets: Not Supported 00:08:44.619 Read Recovery Levels: Not Supported 00:08:44.619 Endurance Groups: Not Supported 00:08:44.619 Predictable Latency Mode: Not Supported 00:08:44.619 Traffic Based Keep ALive: Not Supported 00:08:44.619 Namespace Granularity: Not Supported 00:08:44.619 SQ Associations: Not Supported 00:08:44.619 UUID List: Not Supported 00:08:44.619 Multi-Domain Subsystem: Not Supported 00:08:44.619 Fixed Capacity Management: Not Supported 00:08:44.619 Variable Capacity Management: Not Supported 00:08:44.619 Delete Endurance Group: Not Supported 00:08:44.619 Delete NVM Set: Not Supported 00:08:44.619 Extended LBA Formats Supported: Supported 00:08:44.619 Flexible Data Placement Supported: Not Supported 00:08:44.619 00:08:44.619 Controller Memory Buffer Support 00:08:44.619 ================================ 00:08:44.620 Supported: No 00:08:44.620 00:08:44.620 Persistent Memory Region Support 00:08:44.620 ================================ 00:08:44.620 Supported: No 00:08:44.620 00:08:44.620 Admin Command Set Attributes 00:08:44.620 ============================ 00:08:44.620 Security Send/Receive: 
Not Supported 00:08:44.620 Format NVM: Supported 00:08:44.620 Firmware Activate/Download: Not Supported 00:08:44.620 Namespace Management: Supported 00:08:44.620 Device Self-Test: Not Supported 00:08:44.620 Directives: Supported 00:08:44.620 NVMe-MI: Not Supported 00:08:44.620 Virtualization Management: Not Supported 00:08:44.620 Doorbell Buffer Config: Supported 00:08:44.620 Get LBA Status Capability: Not Supported 00:08:44.620 Command & Feature Lockdown Capability: Not Supported 00:08:44.620 Abort Command Limit: 4 00:08:44.620 Async Event Request Limit: 4 00:08:44.620 Number of Firmware Slots: N/A 00:08:44.620 Firmware Slot 1 Read-Only: N/A 00:08:44.620 Firmware Activation Without Reset: N/A 00:08:44.620 Multiple Update Detection Support: N/A 00:08:44.620 Firmware Update Granularity: No Information Provided 00:08:44.620 Per-Namespace SMART Log: Yes 00:08:44.620 Asymmetric Namespace Access Log Page: Not Supported 00:08:44.620 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:44.620 Command Effects Log Page: Supported 00:08:44.620 Get Log Page Extended Data: Supported 00:08:44.620 Telemetry Log Pages: Not Supported 00:08:44.620 Persistent Event Log Pages: Not Supported 00:08:44.620 Supported Log Pages Log Page: May Support 00:08:44.620 Commands Supported & Effects Log Page: Not Supported 00:08:44.620 Feature Identifiers & Effects Log Page:May Support 00:08:44.620 NVMe-MI Commands & Effects Log Page: May Support 00:08:44.620 Data Area 4 for Telemetry Log: Not Supported 00:08:44.620 Error Log Page Entries Supported: 1 00:08:44.620 Keep Alive: Not Supported 00:08:44.620 00:08:44.620 NVM Command Set Attributes 00:08:44.620 ========================== 00:08:44.620 Submission Queue Entry Size 00:08:44.620 Max: 64 00:08:44.620 Min: 64 00:08:44.620 Completion Queue Entry Size 00:08:44.620 Max: 16 00:08:44.620 Min: 16 00:08:44.620 Number of Namespaces: 256 00:08:44.620 Compare Command: Supported 00:08:44.620 Write Uncorrectable Command: Not Supported 00:08:44.620 Dataset Management Command: Supported 00:08:44.620 Write Zeroes Command: Supported 00:08:44.620 Set Features Save Field: Supported 00:08:44.620 Reservations: Not Supported 00:08:44.620 Timestamp: Supported 00:08:44.620 Copy: Supported 00:08:44.620 Volatile Write Cache: Present 00:08:44.620 Atomic Write Unit (Normal): 1 00:08:44.620 Atomic Write Unit (PFail): 1 00:08:44.620 Atomic Compare & Write Unit: 1 00:08:44.620 Fused Compare & Write: Not Supported 00:08:44.620 Scatter-Gather List 00:08:44.620 SGL Command Set: Supported 00:08:44.620 SGL Keyed: Not Supported 00:08:44.620 SGL Bit Bucket Descriptor: Not Supported 00:08:44.620 SGL Metadata Pointer: Not Supported 00:08:44.620 Oversized SGL: Not Supported 00:08:44.620 SGL Metadata Address: Not Supported 00:08:44.620 SGL Offset: Not Supported 00:08:44.620 Transport SGL Data Block: Not Supported 00:08:44.620 Replay Protected Memory Block: Not Supported 00:08:44.620 00:08:44.620 Firmware Slot Information 00:08:44.620 ========================= 00:08:44.620 Active slot: 1 00:08:44.620 Slot 1 Firmware Revision: 1.0 00:08:44.620 00:08:44.620 00:08:44.620 Commands Supported and Effects 00:08:44.620 ============================== 00:08:44.620 Admin Commands 00:08:44.620 -------------- 00:08:44.620 Delete I/O Submission Queue (00h): Supported 00:08:44.620 Create I/O Submission Queue (01h): Supported 00:08:44.620 Get Log Page (02h): Supported 00:08:44.620 Delete I/O Completion Queue (04h): Supported 00:08:44.620 Create I/O Completion Queue (05h): Supported 00:08:44.620 Identify (06h): Supported 
00:08:44.620 Abort (08h): Supported 00:08:44.620 Set Features (09h): Supported 00:08:44.620 Get Features (0Ah): Supported 00:08:44.620 Asynchronous Event Request (0Ch): Supported 00:08:44.620 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:44.620 Directive Send (19h): Supported 00:08:44.620 Directive Receive (1Ah): Supported 00:08:44.620 Virtualization Management (1Ch): Supported 00:08:44.620 Doorbell Buffer Config (7Ch): Supported 00:08:44.620 Format NVM (80h): Supported LBA-Change 00:08:44.620 I/O Commands 00:08:44.620 ------------ 00:08:44.620 Flush (00h): Supported LBA-Change 00:08:44.620 Write (01h): Supported LBA-Change 00:08:44.620 Read (02h): Supported 00:08:44.620 Compare (05h): Supported 00:08:44.620 Write Zeroes (08h): Supported LBA-Change 00:08:44.620 Dataset Management (09h): Supported LBA-Change 00:08:44.620 Unknown (0Ch): Supported 00:08:44.620 Unknown (12h): Supported 00:08:44.620 Copy (19h): Supported LBA-Change 00:08:44.620 Unknown (1Dh): Supported LBA-Change 00:08:44.620 00:08:44.620 Error Log 00:08:44.620 ========= 00:08:44.620 00:08:44.620 Arbitration 00:08:44.620 =========== 00:08:44.620 Arbitration Burst: no limit 00:08:44.620 00:08:44.620 Power Management 00:08:44.620 ================ 00:08:44.620 Number of Power States: 1 00:08:44.620 Current Power State: Power State #0 00:08:44.620 Power State #0: 00:08:44.620 Max Power: 25.00 W 00:08:44.620 Non-Operational State: Operational 00:08:44.620 Entry Latency: 16 microseconds 00:08:44.620 Exit Latency: 4 microseconds 00:08:44.620 Relative Read Throughput: 0 00:08:44.621 Relative Read Latency: 0 00:08:44.621 Relative Write Throughput: 0 00:08:44.621 Relative Write Latency: 0 00:08:44.621 Idle Power: Not Reported [2024-12-06 03:59:32.021576] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62856 terminated unexpected 00:08:44.621 Active Power: Not Reported 00:08:44.621 Non-Operational Permissive Mode: Not Supported 00:08:44.621 00:08:44.621 Health Information 00:08:44.621 ================== 00:08:44.621 Critical Warnings: 00:08:44.621 Available Spare Space: OK 00:08:44.621 Temperature: OK 00:08:44.621 Device Reliability: OK 00:08:44.621 Read Only: No 00:08:44.621 Volatile Memory Backup: OK 00:08:44.621 Current Temperature: 323 Kelvin (50 Celsius) 00:08:44.621 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:44.621 Available Spare: 0% 00:08:44.621 Available Spare Threshold: 0% 00:08:44.621 Life Percentage Used: 0% 00:08:44.621 Data Units Read: 691 00:08:44.621 Data Units Written: 619 00:08:44.621 Host Read Commands: 39617 00:08:44.621 Host Write Commands: 39403 00:08:44.621 Controller Busy Time: 0 minutes 00:08:44.621 Power Cycles: 0 00:08:44.621 Power On Hours: 0 hours 00:08:44.621 Unsafe Shutdowns: 0 00:08:44.621 Unrecoverable Media Errors: 0 00:08:44.621 Lifetime Error Log Entries: 0 00:08:44.621 Warning Temperature Time: 0 minutes 00:08:44.621 Critical Temperature Time: 0 minutes 00:08:44.621 00:08:44.621 Number of Queues 00:08:44.621 ================ 00:08:44.621 Number of I/O Submission Queues: 64 00:08:44.621 Number of I/O Completion Queues: 64 00:08:44.621 00:08:44.621 ZNS Specific Controller Data 00:08:44.621 ============================ 00:08:44.621 Zone Append Size Limit: 0 00:08:44.621 00:08:44.621 00:08:44.621 Active Namespaces 00:08:44.621 ================= 00:08:44.621 Namespace ID:1 00:08:44.621 Error Recovery Timeout: Unlimited 00:08:44.621 Command Set Identifier: NVM (00h) 00:08:44.621 Deallocate: Supported 00:08:44.621
Deallocated/Unwritten Error: Supported 00:08:44.621 Deallocated Read Value: All 0x00 00:08:44.621 Deallocate in Write Zeroes: Not Supported 00:08:44.621 Deallocated Guard Field: 0xFFFF 00:08:44.621 Flush: Supported 00:08:44.621 Reservation: Not Supported 00:08:44.621 Metadata Transferred as: Separate Metadata Buffer 00:08:44.621 Namespace Sharing Capabilities: Private 00:08:44.621 Size (in LBAs): 1548666 (5GiB) 00:08:44.621 Capacity (in LBAs): 1548666 (5GiB) 00:08:44.621 Utilization (in LBAs): 1548666 (5GiB) 00:08:44.621 Thin Provisioning: Not Supported 00:08:44.621 Per-NS Atomic Units: No 00:08:44.621 Maximum Single Source Range Length: 128 00:08:44.621 Maximum Copy Length: 128 00:08:44.621 Maximum Source Range Count: 128 00:08:44.621 NGUID/EUI64 Never Reused: No 00:08:44.621 Namespace Write Protected: No 00:08:44.621 Number of LBA Formats: 8 00:08:44.621 Current LBA Format: LBA Format #07 00:08:44.621 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:44.621 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:44.621 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:44.621 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:44.621 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:44.621 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:44.621 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:44.621 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:44.621 00:08:44.621 NVM Specific Namespace Data 00:08:44.621 =========================== 00:08:44.621 Logical Block Storage Tag Mask: 0 00:08:44.621 Protection Information Capabilities: 00:08:44.621 16b Guard Protection Information Storage Tag Support: No 00:08:44.621 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:44.621 Storage Tag Check Read Support: No 00:08:44.621 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.621 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.621 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.621 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.621 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.621 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.621 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.621 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.621 ===================================================== 00:08:44.621 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:44.621 ===================================================== 00:08:44.621 Controller Capabilities/Features 00:08:44.621 ================================ 00:08:44.621 Vendor ID: 1b36 00:08:44.621 Subsystem Vendor ID: 1af4 00:08:44.621 Serial Number: 12341 00:08:44.621 Model Number: QEMU NVMe Ctrl 00:08:44.621 Firmware Version: 8.0.0 00:08:44.621 Recommended Arb Burst: 6 00:08:44.621 IEEE OUI Identifier: 00 54 52 00:08:44.621 Multi-path I/O 00:08:44.621 May have multiple subsystem ports: No 00:08:44.621 May have multiple controllers: No 00:08:44.621 Associated with SR-IOV VF: No 00:08:44.621 Max Data Transfer Size: 524288 00:08:44.621 Max Number of Namespaces: 256 00:08:44.621 Max Number of I/O Queues: 64 00:08:44.621 NVMe Specification Version (VS): 1.4 00:08:44.621 NVMe 
Specification Version (Identify): 1.4 00:08:44.621 Maximum Queue Entries: 2048 00:08:44.621 Contiguous Queues Required: Yes 00:08:44.621 Arbitration Mechanisms Supported 00:08:44.621 Weighted Round Robin: Not Supported 00:08:44.621 Vendor Specific: Not Supported 00:08:44.621 Reset Timeout: 7500 ms 00:08:44.621 Doorbell Stride: 4 bytes 00:08:44.621 NVM Subsystem Reset: Not Supported 00:08:44.622 Command Sets Supported 00:08:44.622 NVM Command Set: Supported 00:08:44.622 Boot Partition: Not Supported 00:08:44.622 Memory Page Size Minimum: 4096 bytes 00:08:44.622 Memory Page Size Maximum: 65536 bytes 00:08:44.622 Persistent Memory Region: Not Supported 00:08:44.622 Optional Asynchronous Events Supported 00:08:44.622 Namespace Attribute Notices: Supported 00:08:44.622 Firmware Activation Notices: Not Supported 00:08:44.622 ANA Change Notices: Not Supported 00:08:44.622 PLE Aggregate Log Change Notices: Not Supported 00:08:44.622 LBA Status Info Alert Notices: Not Supported 00:08:44.622 EGE Aggregate Log Change Notices: Not Supported 00:08:44.622 Normal NVM Subsystem Shutdown event: Not Supported 00:08:44.622 Zone Descriptor Change Notices: Not Supported 00:08:44.622 Discovery Log Change Notices: Not Supported 00:08:44.622 Controller Attributes 00:08:44.622 128-bit Host Identifier: Not Supported 00:08:44.622 Non-Operational Permissive Mode: Not Supported 00:08:44.622 NVM Sets: Not Supported 00:08:44.622 Read Recovery Levels: Not Supported 00:08:44.622 Endurance Groups: Not Supported 00:08:44.622 Predictable Latency Mode: Not Supported 00:08:44.622 Traffic Based Keep ALive: Not Supported 00:08:44.622 Namespace Granularity: Not Supported 00:08:44.622 SQ Associations: Not Supported 00:08:44.622 UUID List: Not Supported 00:08:44.622 Multi-Domain Subsystem: Not Supported 00:08:44.622 Fixed Capacity Management: Not Supported 00:08:44.622 Variable Capacity Management: Not Supported 00:08:44.622 Delete Endurance Group: Not Supported 00:08:44.622 Delete NVM Set: Not Supported 00:08:44.622 Extended LBA Formats Supported: Supported 00:08:44.622 Flexible Data Placement Supported: Not Supported 00:08:44.622 00:08:44.622 Controller Memory Buffer Support 00:08:44.622 ================================ 00:08:44.622 Supported: No 00:08:44.622 00:08:44.622 Persistent Memory Region Support 00:08:44.622 ================================ 00:08:44.622 Supported: No 00:08:44.622 00:08:44.622 Admin Command Set Attributes 00:08:44.622 ============================ 00:08:44.622 Security Send/Receive: Not Supported 00:08:44.622 Format NVM: Supported 00:08:44.622 Firmware Activate/Download: Not Supported 00:08:44.622 Namespace Management: Supported 00:08:44.622 Device Self-Test: Not Supported 00:08:44.622 Directives: Supported 00:08:44.622 NVMe-MI: Not Supported 00:08:44.622 Virtualization Management: Not Supported 00:08:44.622 Doorbell Buffer Config: Supported 00:08:44.622 Get LBA Status Capability: Not Supported 00:08:44.622 Command & Feature Lockdown Capability: Not Supported 00:08:44.622 Abort Command Limit: 4 00:08:44.622 Async Event Request Limit: 4 00:08:44.622 Number of Firmware Slots: N/A 00:08:44.622 Firmware Slot 1 Read-Only: N/A 00:08:44.622 Firmware Activation Without Reset: N/A 00:08:44.622 Multiple Update Detection Support: N/A 00:08:44.622 Firmware Update Granularity: No Information Provided 00:08:44.622 Per-Namespace SMART Log: Yes 00:08:44.622 Asymmetric Namespace Access Log Page: Not Supported 00:08:44.622 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:44.622 Command Effects Log Page: Supported 
00:08:44.622 Get Log Page Extended Data: Supported 00:08:44.622 Telemetry Log Pages: Not Supported 00:08:44.622 Persistent Event Log Pages: Not Supported 00:08:44.622 Supported Log Pages Log Page: May Support 00:08:44.622 Commands Supported & Effects Log Page: Not Supported 00:08:44.622 Feature Identifiers & Effects Log Page:May Support 00:08:44.622 NVMe-MI Commands & Effects Log Page: May Support 00:08:44.622 Data Area 4 for Telemetry Log: Not Supported 00:08:44.622 Error Log Page Entries Supported: 1 00:08:44.622 Keep Alive: Not Supported 00:08:44.622 00:08:44.622 NVM Command Set Attributes 00:08:44.622 ========================== 00:08:44.622 Submission Queue Entry Size 00:08:44.622 Max: 64 00:08:44.622 Min: 64 00:08:44.622 Completion Queue Entry Size 00:08:44.622 Max: 16 00:08:44.622 Min: 16 00:08:44.622 Number of Namespaces: 256 00:08:44.622 Compare Command: Supported 00:08:44.622 Write Uncorrectable Command: Not Supported 00:08:44.622 Dataset Management Command: Supported 00:08:44.622 Write Zeroes Command: Supported 00:08:44.622 Set Features Save Field: Supported 00:08:44.622 Reservations: Not Supported 00:08:44.622 Timestamp: Supported 00:08:44.622 Copy: Supported 00:08:44.622 Volatile Write Cache: Present 00:08:44.622 Atomic Write Unit (Normal): 1 00:08:44.622 Atomic Write Unit (PFail): 1 00:08:44.622 Atomic Compare & Write Unit: 1 00:08:44.622 Fused Compare & Write: Not Supported 00:08:44.622 Scatter-Gather List 00:08:44.622 SGL Command Set: Supported 00:08:44.622 SGL Keyed: Not Supported 00:08:44.622 SGL Bit Bucket Descriptor: Not Supported 00:08:44.622 SGL Metadata Pointer: Not Supported 00:08:44.622 Oversized SGL: Not Supported 00:08:44.622 SGL Metadata Address: Not Supported 00:08:44.622 SGL Offset: Not Supported 00:08:44.622 Transport SGL Data Block: Not Supported 00:08:44.622 Replay Protected Memory Block: Not Supported 00:08:44.622 00:08:44.622 Firmware Slot Information 00:08:44.622 ========================= 00:08:44.622 Active slot: 1 00:08:44.622 Slot 1 Firmware Revision: 1.0 00:08:44.622 00:08:44.622 00:08:44.622 Commands Supported and Effects 00:08:44.622 ============================== 00:08:44.622 Admin Commands 00:08:44.622 -------------- 00:08:44.622 Delete I/O Submission Queue (00h): Supported 00:08:44.622 Create I/O Submission Queue (01h): Supported 00:08:44.622 Get Log Page (02h): Supported 00:08:44.622 Delete I/O Completion Queue (04h): Supported 00:08:44.622 Create I/O Completion Queue (05h): Supported 00:08:44.623 Identify (06h): Supported 00:08:44.623 Abort (08h): Supported 00:08:44.623 Set Features (09h): Supported 00:08:44.623 Get Features (0Ah): Supported 00:08:44.623 Asynchronous Event Request (0Ch): Supported 00:08:44.623 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:44.623 Directive Send (19h): Supported 00:08:44.623 Directive Receive (1Ah): Supported 00:08:44.623 Virtualization Management (1Ch): Supported 00:08:44.623 Doorbell Buffer Config (7Ch): Supported 00:08:44.623 Format NVM (80h): Supported LBA-Change 00:08:44.623 I/O Commands 00:08:44.623 ------------ 00:08:44.623 Flush (00h): Supported LBA-Change 00:08:44.623 Write (01h): Supported LBA-Change 00:08:44.623 Read (02h): Supported 00:08:44.623 Compare (05h): Supported 00:08:44.623 Write Zeroes (08h): Supported LBA-Change 00:08:44.623 Dataset Management (09h): Supported LBA-Change 00:08:44.623 Unknown (0Ch): Supported 00:08:44.623 Unknown (12h): Supported 00:08:44.623 Copy (19h): Supported LBA-Change 00:08:44.623 Unknown (1Dh): Supported LBA-Change 00:08:44.623 00:08:44.623 Error 
Log 00:08:44.623 ========= 00:08:44.623 00:08:44.623 Arbitration 00:08:44.623 =========== 00:08:44.623 Arbitration Burst: no limit 00:08:44.623 00:08:44.623 Power Management 00:08:44.623 ================ 00:08:44.623 Number of Power States: 1 00:08:44.623 Current Power State: Power State #0 00:08:44.623 Power State #0: 00:08:44.623 Max Power: 25.00 W 00:08:44.623 Non-Operational State: Operational 00:08:44.623 Entry Latency: 16 microseconds 00:08:44.623 Exit Latency: 4 microseconds 00:08:44.623 Relative Read Throughput: 0 00:08:44.623 Relative Read Latency: 0 00:08:44.623 Relative Write Throughput: 0 00:08:44.623 Relative Write Latency: 0 00:08:44.623 Idle Power: Not Reported 00:08:44.623 Active Power: Not Reported 00:08:44.623 Non-Operational Permissive Mode: Not Supported 00:08:44.623 00:08:44.623 Health Information 00:08:44.623 ================== 00:08:44.623 Critical Warnings: 00:08:44.623 Available Spare Space: OK 00:08:44.623 Temperature: OK [2024-12-06 03:59:32.022457] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62856 terminated unexpected 00:08:44.623 Device Reliability: OK 00:08:44.623 Read Only: No 00:08:44.623 Volatile Memory Backup: OK 00:08:44.623 Current Temperature: 323 Kelvin (50 Celsius) 00:08:44.623 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:44.623 Available Spare: 0% 00:08:44.623 Available Spare Threshold: 0% 00:08:44.623 Life Percentage Used: 0% 00:08:44.623 Data Units Read: 1059 00:08:44.623 Data Units Written: 926 00:08:44.623 Host Read Commands: 58259 00:08:44.623 Host Write Commands: 57042 00:08:44.623 Controller Busy Time: 0 minutes 00:08:44.623 Power Cycles: 0 00:08:44.623 Power On Hours: 0 hours 00:08:44.623 Unsafe Shutdowns: 0 00:08:44.623 Unrecoverable Media Errors: 0 00:08:44.623 Lifetime Error Log Entries: 0 00:08:44.623 Warning Temperature Time: 0 minutes 00:08:44.623 Critical Temperature Time: 0 minutes 00:08:44.623 00:08:44.623 Number of Queues 00:08:44.623 ================ 00:08:44.623 Number of I/O Submission Queues: 64 00:08:44.623 Number of I/O Completion Queues: 64 00:08:44.623 00:08:44.623 ZNS Specific Controller Data 00:08:44.623 ============================ 00:08:44.623 Zone Append Size Limit: 0 00:08:44.623 00:08:44.623 00:08:44.623 Active Namespaces 00:08:44.623 ================= 00:08:44.623 Namespace ID:1 00:08:44.623 Error Recovery Timeout: Unlimited 00:08:44.623 Command Set Identifier: NVM (00h) 00:08:44.623 Deallocate: Supported 00:08:44.623 Deallocated/Unwritten Error: Supported 00:08:44.623 Deallocated Read Value: All 0x00 00:08:44.623 Deallocate in Write Zeroes: Not Supported 00:08:44.623 Deallocated Guard Field: 0xFFFF 00:08:44.623 Flush: Supported 00:08:44.623 Reservation: Not Supported 00:08:44.623 Namespace Sharing Capabilities: Private 00:08:44.623 Size (in LBAs): 1310720 (5GiB) 00:08:44.623 Capacity (in LBAs): 1310720 (5GiB) 00:08:44.623 Utilization (in LBAs): 1310720 (5GiB) 00:08:44.623 Thin Provisioning: Not Supported 00:08:44.623 Per-NS Atomic Units: No 00:08:44.623 Maximum Single Source Range Length: 128 00:08:44.623 Maximum Copy Length: 128 00:08:44.623 Maximum Source Range Count: 128 00:08:44.623 NGUID/EUI64 Never Reused: No 00:08:44.623 Namespace Write Protected: No 00:08:44.623 Number of LBA Formats: 8 00:08:44.623 Current LBA Format: LBA Format #04 00:08:44.623 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:44.623 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:44.623 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:44.623 LBA Format #03:
Data Size: 512 Metadata Size: 64 00:08:44.623 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:44.623 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:44.623 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:44.623 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:44.623 00:08:44.623 NVM Specific Namespace Data 00:08:44.623 =========================== 00:08:44.623 Logical Block Storage Tag Mask: 0 00:08:44.623 Protection Information Capabilities: 00:08:44.623 16b Guard Protection Information Storage Tag Support: No 00:08:44.623 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:44.623 Storage Tag Check Read Support: No 00:08:44.623 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.623 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.624 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.624 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.624 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.624 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.624 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.624 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.624 ===================================================== 00:08:44.624 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:44.624 ===================================================== 00:08:44.624 Controller Capabilities/Features 00:08:44.624 ================================ 00:08:44.624 Vendor ID: 1b36 00:08:44.624 Subsystem Vendor ID: 1af4 00:08:44.624 Serial Number: 12343 00:08:44.624 Model Number: QEMU NVMe Ctrl 00:08:44.624 Firmware Version: 8.0.0 00:08:44.624 Recommended Arb Burst: 6 00:08:44.624 IEEE OUI Identifier: 00 54 52 00:08:44.624 Multi-path I/O 00:08:44.624 May have multiple subsystem ports: No 00:08:44.624 May have multiple controllers: Yes 00:08:44.624 Associated with SR-IOV VF: No 00:08:44.624 Max Data Transfer Size: 524288 00:08:44.624 Max Number of Namespaces: 256 00:08:44.624 Max Number of I/O Queues: 64 00:08:44.624 NVMe Specification Version (VS): 1.4 00:08:44.624 NVMe Specification Version (Identify): 1.4 00:08:44.624 Maximum Queue Entries: 2048 00:08:44.624 Contiguous Queues Required: Yes 00:08:44.624 Arbitration Mechanisms Supported 00:08:44.624 Weighted Round Robin: Not Supported 00:08:44.624 Vendor Specific: Not Supported 00:08:44.624 Reset Timeout: 7500 ms 00:08:44.624 Doorbell Stride: 4 bytes 00:08:44.624 NVM Subsystem Reset: Not Supported 00:08:44.624 Command Sets Supported 00:08:44.624 NVM Command Set: Supported 00:08:44.624 Boot Partition: Not Supported 00:08:44.624 Memory Page Size Minimum: 4096 bytes 00:08:44.624 Memory Page Size Maximum: 65536 bytes 00:08:44.624 Persistent Memory Region: Not Supported 00:08:44.624 Optional Asynchronous Events Supported 00:08:44.624 Namespace Attribute Notices: Supported 00:08:44.624 Firmware Activation Notices: Not Supported 00:08:44.624 ANA Change Notices: Not Supported 00:08:44.624 PLE Aggregate Log Change Notices: Not Supported 00:08:44.624 LBA Status Info Alert Notices: Not Supported 00:08:44.624 EGE Aggregate Log Change Notices: Not Supported 00:08:44.624 Normal NVM Subsystem Shutdown event: Not Supported 00:08:44.624 Zone 
Descriptor Change Notices: Not Supported 00:08:44.624 Discovery Log Change Notices: Not Supported 00:08:44.624 Controller Attributes 00:08:44.624 128-bit Host Identifier: Not Supported 00:08:44.624 Non-Operational Permissive Mode: Not Supported 00:08:44.624 NVM Sets: Not Supported 00:08:44.624 Read Recovery Levels: Not Supported 00:08:44.624 Endurance Groups: Supported 00:08:44.624 Predictable Latency Mode: Not Supported 00:08:44.624 Traffic Based Keep ALive: Not Supported 00:08:44.624 Namespace Granularity: Not Supported 00:08:44.624 SQ Associations: Not Supported 00:08:44.624 UUID List: Not Supported 00:08:44.624 Multi-Domain Subsystem: Not Supported 00:08:44.624 Fixed Capacity Management: Not Supported 00:08:44.624 Variable Capacity Management: Not Supported 00:08:44.624 Delete Endurance Group: Not Supported 00:08:44.624 Delete NVM Set: Not Supported 00:08:44.624 Extended LBA Formats Supported: Supported 00:08:44.624 Flexible Data Placement Supported: Supported 00:08:44.624 00:08:44.624 Controller Memory Buffer Support 00:08:44.624 ================================ 00:08:44.624 Supported: No 00:08:44.624 00:08:44.624 Persistent Memory Region Support 00:08:44.624 ================================ 00:08:44.624 Supported: No 00:08:44.624 00:08:44.624 Admin Command Set Attributes 00:08:44.624 ============================ 00:08:44.624 Security Send/Receive: Not Supported 00:08:44.624 Format NVM: Supported 00:08:44.624 Firmware Activate/Download: Not Supported 00:08:44.624 Namespace Management: Supported 00:08:44.624 Device Self-Test: Not Supported 00:08:44.624 Directives: Supported 00:08:44.624 NVMe-MI: Not Supported 00:08:44.624 Virtualization Management: Not Supported 00:08:44.624 Doorbell Buffer Config: Supported 00:08:44.624 Get LBA Status Capability: Not Supported 00:08:44.624 Command & Feature Lockdown Capability: Not Supported 00:08:44.624 Abort Command Limit: 4 00:08:44.624 Async Event Request Limit: 4 00:08:44.624 Number of Firmware Slots: N/A 00:08:44.624 Firmware Slot 1 Read-Only: N/A 00:08:44.624 Firmware Activation Without Reset: N/A 00:08:44.624 Multiple Update Detection Support: N/A 00:08:44.624 Firmware Update Granularity: No Information Provided 00:08:44.624 Per-Namespace SMART Log: Yes 00:08:44.624 Asymmetric Namespace Access Log Page: Not Supported 00:08:44.624 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:44.624 Command Effects Log Page: Supported 00:08:44.624 Get Log Page Extended Data: Supported 00:08:44.624 Telemetry Log Pages: Not Supported 00:08:44.624 Persistent Event Log Pages: Not Supported 00:08:44.624 Supported Log Pages Log Page: May Support 00:08:44.624 Commands Supported & Effects Log Page: Not Supported 00:08:44.624 Feature Identifiers & Effects Log Page:May Support 00:08:44.624 NVMe-MI Commands & Effects Log Page: May Support 00:08:44.624 Data Area 4 for Telemetry Log: Not Supported 00:08:44.624 Error Log Page Entries Supported: 1 00:08:44.624 Keep Alive: Not Supported 00:08:44.624 00:08:44.624 NVM Command Set Attributes 00:08:44.624 ========================== 00:08:44.624 Submission Queue Entry Size 00:08:44.624 Max: 64 00:08:44.624 Min: 64 00:08:44.624 Completion Queue Entry Size 00:08:44.625 Max: 16 00:08:44.625 Min: 16 00:08:44.625 Number of Namespaces: 256 00:08:44.625 Compare Command: Supported 00:08:44.625 Write Uncorrectable Command: Not Supported 00:08:44.625 Dataset Management Command: Supported 00:08:44.625 Write Zeroes Command: Supported 00:08:44.625 Set Features Save Field: Supported 00:08:44.625 Reservations: Not Supported 00:08:44.625 
Timestamp: Supported 00:08:44.625 Copy: Supported 00:08:44.625 Volatile Write Cache: Present 00:08:44.625 Atomic Write Unit (Normal): 1 00:08:44.625 Atomic Write Unit (PFail): 1 00:08:44.625 Atomic Compare & Write Unit: 1 00:08:44.625 Fused Compare & Write: Not Supported 00:08:44.625 Scatter-Gather List 00:08:44.625 SGL Command Set: Supported 00:08:44.625 SGL Keyed: Not Supported 00:08:44.625 SGL Bit Bucket Descriptor: Not Supported 00:08:44.625 SGL Metadata Pointer: Not Supported 00:08:44.625 Oversized SGL: Not Supported 00:08:44.625 SGL Metadata Address: Not Supported 00:08:44.625 SGL Offset: Not Supported 00:08:44.625 Transport SGL Data Block: Not Supported 00:08:44.625 Replay Protected Memory Block: Not Supported 00:08:44.625 00:08:44.625 Firmware Slot Information 00:08:44.625 ========================= 00:08:44.625 Active slot: 1 00:08:44.625 Slot 1 Firmware Revision: 1.0 00:08:44.625 00:08:44.625 00:08:44.625 Commands Supported and Effects 00:08:44.625 ============================== 00:08:44.625 Admin Commands 00:08:44.625 -------------- 00:08:44.625 Delete I/O Submission Queue (00h): Supported 00:08:44.625 Create I/O Submission Queue (01h): Supported 00:08:44.625 Get Log Page (02h): Supported 00:08:44.625 Delete I/O Completion Queue (04h): Supported 00:08:44.625 Create I/O Completion Queue (05h): Supported 00:08:44.625 Identify (06h): Supported 00:08:44.625 Abort (08h): Supported 00:08:44.625 Set Features (09h): Supported 00:08:44.625 Get Features (0Ah): Supported 00:08:44.625 Asynchronous Event Request (0Ch): Supported 00:08:44.625 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:44.625 Directive Send (19h): Supported 00:08:44.625 Directive Receive (1Ah): Supported 00:08:44.625 Virtualization Management (1Ch): Supported 00:08:44.625 Doorbell Buffer Config (7Ch): Supported 00:08:44.625 Format NVM (80h): Supported LBA-Change 00:08:44.625 I/O Commands 00:08:44.625 ------------ 00:08:44.625 Flush (00h): Supported LBA-Change 00:08:44.625 Write (01h): Supported LBA-Change 00:08:44.625 Read (02h): Supported 00:08:44.625 Compare (05h): Supported 00:08:44.625 Write Zeroes (08h): Supported LBA-Change 00:08:44.625 Dataset Management (09h): Supported LBA-Change 00:08:44.625 Unknown (0Ch): Supported 00:08:44.625 Unknown (12h): Supported 00:08:44.625 Copy (19h): Supported LBA-Change 00:08:44.625 Unknown (1Dh): Supported LBA-Change 00:08:44.625 00:08:44.625 Error Log 00:08:44.625 ========= 00:08:44.625 00:08:44.625 Arbitration 00:08:44.625 =========== 00:08:44.625 Arbitration Burst: no limit 00:08:44.625 00:08:44.625 Power Management 00:08:44.625 ================ 00:08:44.625 Number of Power States: 1 00:08:44.625 Current Power State: Power State #0 00:08:44.625 Power State #0: 00:08:44.625 Max Power: 25.00 W 00:08:44.625 Non-Operational State: Operational 00:08:44.625 Entry Latency: 16 microseconds 00:08:44.625 Exit Latency: 4 microseconds 00:08:44.625 Relative Read Throughput: 0 00:08:44.625 Relative Read Latency: 0 00:08:44.625 Relative Write Throughput: 0 00:08:44.625 Relative Write Latency: 0 00:08:44.625 Idle Power: Not Reported 00:08:44.625 Active Power: Not Reported 00:08:44.625 Non-Operational Permissive Mode: Not Supported 00:08:44.625 00:08:44.625 Health Information 00:08:44.625 ================== 00:08:44.625 Critical Warnings: 00:08:44.625 Available Spare Space: OK 00:08:44.625 Temperature: OK 00:08:44.625 Device Reliability: OK 00:08:44.625 Read Only: No 00:08:44.625 Volatile Memory Backup: OK 00:08:44.625 Current Temperature: 323 Kelvin (50 Celsius) 00:08:44.625 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:44.625 Available Spare: 0% 00:08:44.625 Available Spare Threshold: 0% 00:08:44.625 Life Percentage Used: 0% 00:08:44.625 Data Units Read: 827 00:08:44.625 Data Units Written: 756 00:08:44.625 Host Read Commands: 41099 00:08:44.625 Host Write Commands: 40522 00:08:44.625 Controller Busy Time: 0 minutes 00:08:44.625 Power Cycles: 0 00:08:44.625 Power On Hours: 0 hours 00:08:44.625 Unsafe Shutdowns: 0 00:08:44.625 Unrecoverable Media Errors: 0 00:08:44.625 Lifetime Error Log Entries: 0 00:08:44.625 Warning Temperature Time: 0 minutes 00:08:44.625 Critical Temperature Time: 0 minutes 00:08:44.625 00:08:44.625 Number of Queues 00:08:44.625 ================ 00:08:44.625 Number of I/O Submission Queues: 64 00:08:44.625 Number of I/O Completion Queues: 64 00:08:44.625 00:08:44.625 ZNS Specific Controller Data 00:08:44.625 ============================ 00:08:44.625 Zone Append Size Limit: 0 00:08:44.625 00:08:44.625 00:08:44.625 Active Namespaces 00:08:44.625 ================= 00:08:44.625 Namespace ID:1 00:08:44.625 Error Recovery Timeout: Unlimited 00:08:44.625 Command Set Identifier: NVM (00h) 00:08:44.625 Deallocate: Supported 00:08:44.625 Deallocated/Unwritten Error: Supported 00:08:44.625 Deallocated Read Value: All 0x00 00:08:44.625 Deallocate in Write Zeroes: Not Supported 00:08:44.626 Deallocated Guard Field: 0xFFFF 00:08:44.626 Flush: Supported 00:08:44.626 Reservation: Not Supported 00:08:44.626 Namespace Sharing Capabilities: Multiple Controllers 00:08:44.626 Size (in LBAs): 262144 (1GiB) 00:08:44.626 Capacity (in LBAs): 262144 (1GiB) 00:08:44.626 Utilization (in LBAs): 262144 (1GiB) 00:08:44.626 Thin Provisioning: Not Supported 00:08:44.626 Per-NS Atomic Units: No 00:08:44.626 Maximum Single Source Range Length: 128 00:08:44.626 Maximum Copy Length: 128 00:08:44.626 Maximum Source Range Count: 128 00:08:44.626 NGUID/EUI64 Never Reused: No 00:08:44.626 Namespace Write Protected: No 00:08:44.626 Endurance group ID: 1 00:08:44.626 Number of LBA Formats: 8 00:08:44.626 Current LBA Format: LBA Format #04 00:08:44.626 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:44.626 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:44.626 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:44.626 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:44.626 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:44.626 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:44.626 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:44.626 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:44.626 00:08:44.626 Get Feature FDP: 00:08:44.626 ================ 00:08:44.626 Enabled: Yes 00:08:44.626 FDP configuration index: 0 00:08:44.626 00:08:44.626 FDP configurations log page 00:08:44.626 =========================== 00:08:44.626 Number of FDP configurations: 1 00:08:44.626 Version: 0 00:08:44.626 Size: 112 00:08:44.626 FDP Configuration Descriptor: 0 00:08:44.626 Descriptor Size: 96 00:08:44.626 Reclaim Group Identifier format: 2 00:08:44.626 FDP Volatile Write Cache: Not Present 00:08:44.626 FDP Configuration: Valid 00:08:44.626 Vendor Specific Size: 0 00:08:44.626 Number of Reclaim Groups: 2 00:08:44.626 Number of Reclaim Unit Handles: 8 00:08:44.626 Max Placement Identifiers: 128 00:08:44.626 Number of Namespaces Supported: 256 00:08:44.626 Reclaim unit Nominal Size: 6000000 bytes 00:08:44.626 Estimated Reclaim Unit Time Limit: Not Reported 00:08:44.626 RUH Desc #000: RUH Type: Initially Isolated 00:08:44.626 RUH Desc #001: RUH
Type: Initially Isolated 00:08:44.626 RUH Desc #002: RUH Type: Initially Isolated 00:08:44.626 RUH Desc #003: RUH Type: Initially Isolated 00:08:44.626 RUH Desc #004: RUH Type: Initially Isolated 00:08:44.626 RUH Desc #005: RUH Type: Initially Isolated 00:08:44.626 RUH Desc #006: RUH Type: Initially Isolated 00:08:44.626 RUH Desc #007: RUH Type: Initially Isolated 00:08:44.626 00:08:44.626 FDP reclaim unit handle usage log page 00:08:44.626 ====================================== 00:08:44.626 Number of Reclaim Unit Handles: 8 00:08:44.626 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:44.626 RUH Usage Desc #001: RUH Attributes: Unused 00:08:44.626 RUH Usage Desc #002: RUH Attributes: Unused 00:08:44.626 RUH Usage Desc #003: RUH Attributes: Unused 00:08:44.626 RUH Usage Desc #004: RUH Attributes: Unused 00:08:44.626 RUH Usage Desc #005: RUH Attributes: Unused 00:08:44.626 RUH Usage Desc #006: RUH Attributes: Unused 00:08:44.626 RUH Usage Desc #007: RUH Attributes: Unused 00:08:44.626 00:08:44.626 FDP statistics log page 00:08:44.626 ======================= 00:08:44.626 Host bytes with metadata written: 414162944 00:08:44.626 Media bytes with metadata written: 414236672 [2024-12-06 03:59:32.023724] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62856 terminated unexpected 00:08:44.626 Media bytes erased: 0 00:08:44.626 00:08:44.626 FDP events log page 00:08:44.626 =================== 00:08:44.626 Number of FDP events: 0 00:08:44.626 00:08:44.626 NVM Specific Namespace Data 00:08:44.626 =========================== 00:08:44.626 Logical Block Storage Tag Mask: 0 00:08:44.626 Protection Information Capabilities: 00:08:44.626 16b Guard Protection Information Storage Tag Support: No 00:08:44.626 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:44.626 Storage Tag Check Read Support: No 00:08:44.626 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.626 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.626 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.626 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.626 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.626 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.626 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.626 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.626 ===================================================== 00:08:44.626 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:44.626 ===================================================== 00:08:44.626 Controller Capabilities/Features 00:08:44.626 ================================ 00:08:44.626 Vendor ID: 1b36 00:08:44.626 Subsystem Vendor ID: 1af4 00:08:44.626 Serial Number: 12342 00:08:44.626 Model Number: QEMU NVMe Ctrl 00:08:44.626 Firmware Version: 8.0.0 00:08:44.626 Recommended Arb Burst: 6 00:08:44.626 IEEE OUI Identifier: 00 54 52 00:08:44.626 Multi-path I/O 00:08:44.626 May have multiple subsystem ports: No 00:08:44.626 May have multiple controllers: No 00:08:44.626 Associated with SR-IOV VF: No 00:08:44.626 Max Data Transfer Size: 524288 00:08:44.626 Max Number of Namespaces: 256
00:08:44.626 ===================================================== 00:08:44.626 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:44.626 ===================================================== 00:08:44.626 Controller Capabilities/Features 00:08:44.626 ================================ 00:08:44.626 Vendor ID: 1b36 00:08:44.626 Subsystem Vendor ID: 1af4 00:08:44.626 Serial Number: 12342 00:08:44.626 Model Number: QEMU NVMe Ctrl 00:08:44.626 Firmware Version: 8.0.0 00:08:44.626 Recommended Arb Burst: 6 00:08:44.626 IEEE OUI Identifier: 00 54 52 00:08:44.626 Multi-path I/O 00:08:44.626 May have multiple subsystem ports: No 00:08:44.626 May have multiple controllers: No 00:08:44.626 Associated with SR-IOV VF: No 00:08:44.626 Max Data Transfer Size: 524288 00:08:44.626 Max Number of Namespaces: 256 00:08:44.626 Max Number of I/O Queues: 64 00:08:44.626 NVMe Specification Version (VS): 1.4 00:08:44.626 NVMe Specification Version (Identify): 1.4 00:08:44.626 Maximum Queue Entries: 2048 00:08:44.626 Contiguous Queues Required: Yes 00:08:44.627 Arbitration Mechanisms Supported 00:08:44.627 Weighted Round Robin: Not Supported 00:08:44.627 Vendor Specific: Not Supported 00:08:44.627 Reset Timeout: 7500 ms 00:08:44.627 Doorbell Stride: 4 bytes 00:08:44.627 NVM Subsystem Reset: Not Supported 00:08:44.627 Command Sets Supported 00:08:44.627 NVM Command Set: Supported 00:08:44.627 Boot Partition: Not Supported 00:08:44.627 Memory Page Size Minimum: 4096 bytes 00:08:44.627 Memory Page Size Maximum: 65536 bytes 00:08:44.627 Persistent Memory Region: Not Supported 00:08:44.627 Optional Asynchronous Events Supported 00:08:44.627 Namespace Attribute Notices: Supported 00:08:44.627 Firmware Activation Notices: Not Supported 00:08:44.627 ANA Change Notices: Not Supported 00:08:44.627 PLE Aggregate Log Change Notices: Not Supported 00:08:44.627 LBA Status Info Alert Notices: Not Supported 00:08:44.627 EGE Aggregate Log Change Notices: Not Supported 00:08:44.627 Normal NVM Subsystem Shutdown event: Not Supported 00:08:44.627 Zone Descriptor Change Notices: Not Supported 00:08:44.627 Discovery Log Change Notices: Not Supported 00:08:44.627 Controller Attributes 00:08:44.627 128-bit Host Identifier: Not Supported 00:08:44.627 Non-Operational Permissive Mode: Not Supported 00:08:44.627 NVM Sets: Not Supported 00:08:44.627 Read Recovery Levels: Not Supported 00:08:44.627 Endurance Groups: Not Supported 00:08:44.627 Predictable Latency Mode: Not Supported 00:08:44.627 Traffic Based Keep Alive: Not Supported 00:08:44.627 Namespace Granularity: Not Supported 00:08:44.627 SQ Associations: Not Supported 00:08:44.627 UUID List: Not Supported 00:08:44.627 Multi-Domain Subsystem: Not Supported 00:08:44.627 Fixed Capacity Management: Not Supported 00:08:44.627 Variable Capacity Management: Not Supported 00:08:44.627 Delete Endurance Group: Not Supported 00:08:44.627 Delete NVM Set: Not Supported 00:08:44.627 Extended LBA Formats Supported: Supported 00:08:44.627 Flexible Data Placement Supported: Not Supported 00:08:44.627 00:08:44.627 Controller Memory Buffer Support 00:08:44.627 ================================ 00:08:44.627 Supported: No 00:08:44.627 00:08:44.627 Persistent Memory Region Support 00:08:44.627 ================================ 00:08:44.627 Supported: No 00:08:44.627 00:08:44.627 Admin Command Set Attributes 00:08:44.627 ============================ 00:08:44.627 Security Send/Receive: Not Supported 00:08:44.627 Format NVM: Supported 00:08:44.627 Firmware Activate/Download: Not Supported 00:08:44.627 Namespace Management: Supported 00:08:44.627 Device Self-Test: Not Supported 00:08:44.627 Directives: Supported 00:08:44.627 NVMe-MI: Not Supported 00:08:44.627 Virtualization Management: Not Supported 00:08:44.627 Doorbell Buffer Config: Supported 00:08:44.627 Get LBA Status Capability: Not Supported 00:08:44.627 Command & Feature Lockdown Capability: Not Supported 00:08:44.627 Abort Command Limit: 4 00:08:44.627 Async Event Request Limit: 4 00:08:44.627 Number of Firmware Slots: N/A 00:08:44.627 Firmware Slot 1 Read-Only: N/A 00:08:44.627 Firmware Activation Without Reset: N/A 00:08:44.627 Multiple Update Detection Support: N/A 00:08:44.627 Firmware Update Granularity: No Information Provided 00:08:44.627 Per-Namespace SMART Log: Yes 00:08:44.627 Asymmetric Namespace Access Log Page: Not Supported 00:08:44.627 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:44.627 Command Effects Log Page: Supported 00:08:44.627 Get Log Page Extended Data: Supported 00:08:44.627 Telemetry Log Pages: Not Supported 00:08:44.627 Persistent Event Log Pages: Not Supported 00:08:44.627 Supported Log Pages Log Page: May Support 00:08:44.627 Commands Supported & Effects Log Page: Not Supported 00:08:44.627 Feature Identifiers & Effects Log Page: May Support 00:08:44.627 NVMe-MI Commands & Effects Log Page: May Support 00:08:44.627 Data Area 4 for Telemetry Log: Not Supported 00:08:44.627 Error Log Page Entries Supported: 1 00:08:44.627 Keep Alive: Not Supported 00:08:44.627
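The "Active Namespaces" tables in these dumps (namespace IDs, sizes in LBAs, the eight LBA formats) can be reproduced by walking the controller's active-namespace list. A sketch using SPDK's namespace accessors, assuming a connected ctrlr handle; the printed data size is 2^lbads bytes, which is where the 512/4096 values above come from:

#include <stdio.h>
#include "spdk/nvme.h"

static void
dump_ns_formats(struct spdk_nvme_ctrlr *ctrlr)
{
    /* Iterate active namespace IDs; 0 terminates the walk. */
    for (uint32_t nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
         nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
        const struct spdk_nvme_ns_data *nsdata = spdk_nvme_ns_get_data(ns);

        printf("Namespace ID:%u\n", nsid);
        printf("Size (in LBAs): %llu\n", (unsigned long long)nsdata->nsze);
        /* nlbaf is zero-based, so nlbaf == 7 means eight formats. */
        for (int i = 0; i <= nsdata->nlbaf; i++) {
            printf("LBA Format #%02d: Data Size: %u Metadata Size: %u\n",
                   i, 1u << nsdata->lbaf[i].lbads, nsdata->lbaf[i].ms);
        }
    }
}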
00:08:44.627 NVM Command Set Attributes 00:08:44.627 ========================== 00:08:44.627 Submission Queue Entry Size 00:08:44.627 Max: 64 00:08:44.627 Min: 64 00:08:44.627 Completion Queue Entry Size 00:08:44.627 Max: 16 00:08:44.627 Min: 16 00:08:44.627 Number of Namespaces: 256 00:08:44.627 Compare Command: Supported 00:08:44.628 Write Uncorrectable Command: Not Supported 00:08:44.628 Dataset Management Command: Supported 00:08:44.628 Write Zeroes Command: Supported 00:08:44.628 Set Features Save Field: Supported 00:08:44.628 Reservations: Not Supported 00:08:44.628 Timestamp: Supported 00:08:44.628 Copy: Supported 00:08:44.628 Volatile Write Cache: Present 00:08:44.628 Atomic Write Unit (Normal): 1 00:08:44.628 Atomic Write Unit (PFail): 1 00:08:44.628 Atomic Compare & Write Unit: 1 00:08:44.628 Fused Compare & Write: Not Supported 00:08:44.628 Scatter-Gather List 00:08:44.628 SGL Command Set: Supported 00:08:44.628 SGL Keyed: Not Supported 00:08:44.628 SGL Bit Bucket Descriptor: Not Supported 00:08:44.628 SGL Metadata Pointer: Not Supported 00:08:44.628 Oversized SGL: Not Supported 00:08:44.628 SGL Metadata Address: Not Supported 00:08:44.628 SGL Offset: Not Supported 00:08:44.628 Transport SGL Data Block: Not Supported 00:08:44.628 Replay Protected Memory Block: Not Supported 00:08:44.628 00:08:44.628 Firmware Slot Information 00:08:44.628 ========================= 00:08:44.628 Active slot: 1 00:08:44.628 Slot 1 Firmware Revision: 1.0 00:08:44.628 00:08:44.628 00:08:44.628 Commands Supported and Effects 00:08:44.628 ============================== 00:08:44.628 Admin Commands 00:08:44.628 -------------- 00:08:44.628 Delete I/O Submission Queue (00h): Supported 00:08:44.628 Create I/O Submission Queue (01h): Supported 00:08:44.628 Get Log Page (02h): Supported 00:08:44.628 Delete I/O Completion Queue (04h): Supported 00:08:44.628 Create I/O Completion Queue (05h): Supported 00:08:44.628 Identify (06h): Supported 00:08:44.628 Abort (08h): Supported 00:08:44.628 Set Features (09h): Supported 00:08:44.628 Get Features (0Ah): Supported 00:08:44.628 Asynchronous Event Request (0Ch): Supported 00:08:44.628 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:44.628 Directive Send (19h): Supported 00:08:44.628 Directive Receive (1Ah): Supported 00:08:44.628 Virtualization Management (1Ch): Supported 00:08:44.628 Doorbell Buffer Config (7Ch): Supported 00:08:44.628 Format NVM (80h): Supported LBA-Change 00:08:44.628 I/O Commands 00:08:44.628 ------------ 00:08:44.628 Flush (00h): Supported LBA-Change 00:08:44.628 Write (01h): Supported LBA-Change 00:08:44.628 Read (02h): Supported 00:08:44.628 Compare (05h): Supported 00:08:44.628 Write Zeroes (08h): Supported LBA-Change 00:08:44.628 Dataset Management (09h): Supported LBA-Change 00:08:44.628 Unknown (0Ch): Supported 00:08:44.628 Unknown (12h): Supported 00:08:44.628 Copy (19h):
Supported LBA-Change 00:08:44.628 Unknown (1Dh): Supported LBA-Change 00:08:44.628 00:08:44.628 Error Log 00:08:44.628 ========= 00:08:44.628 00:08:44.628 Arbitration 00:08:44.628 =========== 00:08:44.628 Arbitration Burst: no limit 00:08:44.628 00:08:44.628 Power Management 00:08:44.628 ================ 00:08:44.628 Number of Power States: 1 00:08:44.628 Current Power State: Power State #0 00:08:44.628 Power State #0: 00:08:44.628 Max Power: 25.00 W 00:08:44.628 Non-Operational State: Operational 00:08:44.628 Entry Latency: 16 microseconds 00:08:44.628 Exit Latency: 4 microseconds 00:08:44.628 Relative Read Throughput: 0 00:08:44.628 Relative Read Latency: 0 00:08:44.628 Relative Write Throughput: 0 00:08:44.628 Relative Write Latency: 0 00:08:44.628 Idle Power: Not Reported 00:08:44.628 Active Power: Not Reported 00:08:44.628 Non-Operational Permissive Mode: Not Supported 00:08:44.628 00:08:44.628 Health Information 00:08:44.628 ================== 00:08:44.628 Critical Warnings: 00:08:44.628 Available Spare Space: OK 00:08:44.628 Temperature: OK 00:08:44.628 Device Reliability: OK 00:08:44.628 Read Only: No 00:08:44.628 Volatile Memory Backup: OK 00:08:44.628 Current Temperature: 323 Kelvin (50 Celsius) 00:08:44.628 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:44.628 Available Spare: 0% 00:08:44.628 Available Spare Threshold: 0% 00:08:44.628 Life Percentage Used: 0% 00:08:44.628 Data Units Read: 2175 00:08:44.628 Data Units Written: 1962 00:08:44.628 Host Read Commands: 120372 00:08:44.628 Host Write Commands: 118642 00:08:44.628 Controller Busy Time: 0 minutes 00:08:44.628 Power Cycles: 0 00:08:44.628 Power On Hours: 0 hours 00:08:44.628 Unsafe Shutdowns: 0 00:08:44.628 Unrecoverable Media Errors: 0 00:08:44.628 Lifetime Error Log Entries: 0 00:08:44.628 Warning Temperature Time: 0 minutes 00:08:44.628 Critical Temperature Time: 0 minutes 00:08:44.628 00:08:44.628 Number of Queues 00:08:44.628 ================ 00:08:44.628 Number of I/O Submission Queues: 64 00:08:44.628 Number of I/O Completion Queues: 64 00:08:44.628 00:08:44.628 ZNS Specific Controller Data 00:08:44.628 ============================ 00:08:44.628 Zone Append Size Limit: 0 00:08:44.628 00:08:44.628 00:08:44.628 Active Namespaces 00:08:44.628 ================= 00:08:44.628 Namespace ID:1 00:08:44.628 Error Recovery Timeout: Unlimited 00:08:44.628 Command Set Identifier: NVM (00h) 00:08:44.628 Deallocate: Supported 00:08:44.628 Deallocated/Unwritten Error: Supported 00:08:44.628 Deallocated Read Value: All 0x00 00:08:44.628 Deallocate in Write Zeroes: Not Supported 00:08:44.628 Deallocated Guard Field: 0xFFFF 00:08:44.628 Flush: Supported 00:08:44.628 Reservation: Not Supported 00:08:44.629 Namespace Sharing Capabilities: Private 00:08:44.629 Size (in LBAs): 1048576 (4GiB) 00:08:44.629 Capacity (in LBAs): 1048576 (4GiB) 00:08:44.629 Utilization (in LBAs): 1048576 (4GiB) 00:08:44.629 Thin Provisioning: Not Supported 00:08:44.629 Per-NS Atomic Units: No 00:08:44.629 Maximum Single Source Range Length: 128 00:08:44.629 Maximum Copy Length: 128 00:08:44.629 Maximum Source Range Count: 128 00:08:44.629 NGUID/EUI64 Never Reused: No 00:08:44.629 Namespace Write Protected: No 00:08:44.629 Number of LBA Formats: 8 00:08:44.629 Current LBA Format: LBA Format #04 00:08:44.629 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:44.629 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:44.629 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:44.629 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:44.629 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:08:44.629 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:44.629 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:44.629 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:44.629 00:08:44.629 NVM Specific Namespace Data 00:08:44.629 =========================== 00:08:44.629 Logical Block Storage Tag Mask: 0 00:08:44.629 Protection Information Capabilities: 00:08:44.629 16b Guard Protection Information Storage Tag Support: No 00:08:44.629 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:44.629 Storage Tag Check Read Support: No 00:08:44.629 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.629 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.629 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.629 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.629 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.629 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.629 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.629 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.629 Namespace ID:2 00:08:44.629 Error Recovery Timeout: Unlimited 00:08:44.629 Command Set Identifier: NVM (00h) 00:08:44.629 Deallocate: Supported 00:08:44.629 Deallocated/Unwritten Error: Supported 00:08:44.629 Deallocated Read Value: All 0x00 00:08:44.629 Deallocate in Write Zeroes: Not Supported 00:08:44.629 Deallocated Guard Field: 0xFFFF 00:08:44.629 Flush: Supported 00:08:44.629 Reservation: Not Supported 00:08:44.629 Namespace Sharing Capabilities: Private 00:08:44.629 Size (in LBAs): 1048576 (4GiB) 00:08:44.629 Capacity (in LBAs): 1048576 (4GiB) 00:08:44.629 Utilization (in LBAs): 1048576 (4GiB) 00:08:44.629 Thin Provisioning: Not Supported 00:08:44.629 Per-NS Atomic Units: No 00:08:44.629 Maximum Single Source Range Length: 128 00:08:44.629 Maximum Copy Length: 128 00:08:44.629 Maximum Source Range Count: 128 00:08:44.629 NGUID/EUI64 Never Reused: No 00:08:44.629 Namespace Write Protected: No 00:08:44.629 Number of LBA Formats: 8 00:08:44.629 Current LBA Format: LBA Format #04 00:08:44.629 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:44.629 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:44.629 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:44.629 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:44.629 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:44.629 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:44.629 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:44.629 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:44.629 00:08:44.629 NVM Specific Namespace Data 00:08:44.629 =========================== 00:08:44.629 Logical Block Storage Tag Mask: 0 00:08:44.629 Protection Information Capabilities: 00:08:44.629 16b Guard Protection Information Storage Tag Support: No 00:08:44.629 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:44.629 Storage Tag Check Read Support: No 00:08:44.629 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.629 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:08:44.629 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.629 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.629 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.629 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.629 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.629 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.629 Namespace ID:3 00:08:44.629 Error Recovery Timeout: Unlimited 00:08:44.629 Command Set Identifier: NVM (00h) 00:08:44.629 Deallocate: Supported 00:08:44.629 Deallocated/Unwritten Error: Supported 00:08:44.629 Deallocated Read Value: All 0x00 00:08:44.629 Deallocate in Write Zeroes: Not Supported 00:08:44.629 Deallocated Guard Field: 0xFFFF 00:08:44.629 Flush: Supported 00:08:44.629 Reservation: Not Supported 00:08:44.629 Namespace Sharing Capabilities: Private 00:08:44.629 Size (in LBAs): 1048576 (4GiB) 00:08:44.629 Capacity (in LBAs): 1048576 (4GiB) 00:08:44.629 Utilization (in LBAs): 1048576 (4GiB) 00:08:44.629 Thin Provisioning: Not Supported 00:08:44.629 Per-NS Atomic Units: No 00:08:44.629 Maximum Single Source Range Length: 128 00:08:44.629 Maximum Copy Length: 128 00:08:44.629 Maximum Source Range Count: 128 00:08:44.629 NGUID/EUI64 Never Reused: No 00:08:44.629 Namespace Write Protected: No 00:08:44.629 Number of LBA Formats: 8 00:08:44.629 Current LBA Format: LBA Format #04 00:08:44.629 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:44.629 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:44.629 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:44.629 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:44.629 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:44.629 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:44.629 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:44.630 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:44.630 00:08:44.630 NVM Specific Namespace Data 00:08:44.630 =========================== 00:08:44.630 Logical Block Storage Tag Mask: 0 00:08:44.630 Protection Information Capabilities: 00:08:44.630 16b Guard Protection Information Storage Tag Support: No 00:08:44.630 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:44.630 Storage Tag Check Read Support: No 00:08:44.630 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.630 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.630 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.630 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.630 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.630 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.630 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.630 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.630 03:59:32 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:44.630 03:59:32 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:44.891 ===================================================== 00:08:44.891 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:44.891 ===================================================== 00:08:44.891 Controller Capabilities/Features 00:08:44.891 ================================ 00:08:44.891 Vendor ID: 1b36 00:08:44.891 Subsystem Vendor ID: 1af4 00:08:44.891 Serial Number: 12340 00:08:44.891 Model Number: QEMU NVMe Ctrl 00:08:44.891 Firmware Version: 8.0.0 00:08:44.891 Recommended Arb Burst: 6 00:08:44.891 IEEE OUI Identifier: 00 54 52 00:08:44.891 Multi-path I/O 00:08:44.891 May have multiple subsystem ports: No 00:08:44.891 May have multiple controllers: No 00:08:44.891 Associated with SR-IOV VF: No 00:08:44.891 Max Data Transfer Size: 524288 00:08:44.891 Max Number of Namespaces: 256 00:08:44.891 Max Number of I/O Queues: 64 00:08:44.891 NVMe Specification Version (VS): 1.4 00:08:44.891 NVMe Specification Version (Identify): 1.4 00:08:44.891 Maximum Queue Entries: 2048 00:08:44.891 Contiguous Queues Required: Yes 00:08:44.891 Arbitration Mechanisms Supported 00:08:44.891 Weighted Round Robin: Not Supported 00:08:44.891 Vendor Specific: Not Supported 00:08:44.891 Reset Timeout: 7500 ms 00:08:44.891 Doorbell Stride: 4 bytes 00:08:44.891 NVM Subsystem Reset: Not Supported 00:08:44.891 Command Sets Supported 00:08:44.891 NVM Command Set: Supported 00:08:44.891 Boot Partition: Not Supported 00:08:44.891 Memory Page Size Minimum: 4096 bytes 00:08:44.891 Memory Page Size Maximum: 65536 bytes 00:08:44.891 Persistent Memory Region: Not Supported 00:08:44.891 Optional Asynchronous Events Supported 00:08:44.891 Namespace Attribute Notices: Supported 00:08:44.891 Firmware Activation Notices: Not Supported 00:08:44.891 ANA Change Notices: Not Supported 00:08:44.891 PLE Aggregate Log Change Notices: Not Supported 00:08:44.891 LBA Status Info Alert Notices: Not Supported 00:08:44.891 EGE Aggregate Log Change Notices: Not Supported 00:08:44.891 Normal NVM Subsystem Shutdown event: Not Supported 00:08:44.891 Zone Descriptor Change Notices: Not Supported 00:08:44.891 Discovery Log Change Notices: Not Supported 00:08:44.891 Controller Attributes 00:08:44.891 128-bit Host Identifier: Not Supported 00:08:44.891 Non-Operational Permissive Mode: Not Supported 00:08:44.891 NVM Sets: Not Supported 00:08:44.891 Read Recovery Levels: Not Supported 00:08:44.891 Endurance Groups: Not Supported 00:08:44.891 Predictable Latency Mode: Not Supported 00:08:44.891 Traffic Based Keep Alive: Not Supported 00:08:44.891 Namespace Granularity: Not Supported 00:08:44.891 SQ Associations: Not Supported 00:08:44.891 UUID List: Not Supported 00:08:44.891 Multi-Domain Subsystem: Not Supported 00:08:44.891 Fixed Capacity Management: Not Supported 00:08:44.891 Variable Capacity Management: Not Supported 00:08:44.891 Delete Endurance Group: Not Supported 00:08:44.891 Delete NVM Set: Not Supported 00:08:44.891 Extended LBA Formats Supported: Supported 00:08:44.891 Flexible Data Placement Supported: Not Supported 00:08:44.891 00:08:44.891 Controller Memory Buffer Support 00:08:44.891 ================================ 00:08:44.891 Supported: No 00:08:44.891 00:08:44.891 Persistent Memory Region Support 00:08:44.891 ================================ 00:08:44.891 Supported: No 00:08:44.891
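The surrounding shell trace loops over BDFs and execs spdk_nvme_identify once per device. Inside a single process the equivalent enumeration is one spdk_nvme_probe() call, with the attach callback invoked per controller that binds. A sketch under that assumption; the callback names are illustrative, not part of this test:

#include <stdio.h>
#include <stdbool.h>
#include "spdk/nvme.h"

/* Return true to attach to every controller the scan finds. */
static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    return true;
}

/* Fires once per attached controller, mirroring one identify run above. */
static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
    printf("NVMe Controller at %s\n", trid->traddr);
}

static int
enumerate_all(void)
{
    /* A NULL transport ID means "scan all local PCIe controllers",
     * the in-process analog of the bdfs loop in the trace. */
    return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}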
00:08:44.891 Admin Command Set Attributes 00:08:44.891 ============================ 00:08:44.891 Security Send/Receive: Not Supported 00:08:44.891 Format NVM: Supported 00:08:44.891 Firmware Activate/Download: Not Supported 00:08:44.891 Namespace Management: Supported 00:08:44.891 Device Self-Test: Not Supported 00:08:44.891 Directives: Supported 00:08:44.891 NVMe-MI: Not Supported 00:08:44.891 Virtualization Management: Not Supported 00:08:44.891 Doorbell Buffer Config: Supported 00:08:44.891 Get LBA Status Capability: Not Supported 00:08:44.891 Command & Feature Lockdown Capability: Not Supported 00:08:44.891 Abort Command Limit: 4 00:08:44.891 Async Event Request Limit: 4 00:08:44.891 Number of Firmware Slots: N/A 00:08:44.891 Firmware Slot 1 Read-Only: N/A 00:08:44.891 Firmware Activation Without Reset: N/A 00:08:44.891 Multiple Update Detection Support: N/A 00:08:44.891 Firmware Update Granularity: No Information Provided 00:08:44.891 Per-Namespace SMART Log: Yes 00:08:44.891 Asymmetric Namespace Access Log Page: Not Supported 00:08:44.891 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:44.891 Command Effects Log Page: Supported 00:08:44.891 Get Log Page Extended Data: Supported 00:08:44.891 Telemetry Log Pages: Not Supported 00:08:44.891 Persistent Event Log Pages: Not Supported 00:08:44.891 Supported Log Pages Log Page: May Support 00:08:44.891 Commands Supported & Effects Log Page: Not Supported 00:08:44.891 Feature Identifiers & Effects Log Page: May Support 00:08:44.891 NVMe-MI Commands & Effects Log Page: May Support 00:08:44.891 Data Area 4 for Telemetry Log: Not Supported 00:08:44.891 Error Log Page Entries Supported: 1 00:08:44.891 Keep Alive: Not Supported 00:08:44.891 00:08:44.891 NVM Command Set Attributes 00:08:44.891 ========================== 00:08:44.891 Submission Queue Entry Size 00:08:44.891 Max: 64 00:08:44.891 Min: 64 00:08:44.891 Completion Queue Entry Size 00:08:44.891 Max: 16 00:08:44.891 Min: 16 00:08:44.891 Number of Namespaces: 256 00:08:44.891 Compare Command: Supported 00:08:44.891 Write Uncorrectable Command: Not Supported 00:08:44.891 Dataset Management Command: Supported 00:08:44.891 Write Zeroes Command: Supported 00:08:44.891 Set Features Save Field: Supported 00:08:44.891 Reservations: Not Supported 00:08:44.891 Timestamp: Supported 00:08:44.891 Copy: Supported 00:08:44.891 Volatile Write Cache: Present 00:08:44.891 Atomic Write Unit (Normal): 1 00:08:44.891 Atomic Write Unit (PFail): 1 00:08:44.891 Atomic Compare & Write Unit: 1 00:08:44.891 Fused Compare & Write: Not Supported 00:08:44.891 Scatter-Gather List 00:08:44.891 SGL Command Set: Supported 00:08:44.891 SGL Keyed: Not Supported 00:08:44.891 SGL Bit Bucket Descriptor: Not Supported 00:08:44.891 SGL Metadata Pointer: Not Supported 00:08:44.891 Oversized SGL: Not Supported 00:08:44.891 SGL Metadata Address: Not Supported 00:08:44.891 SGL Offset: Not Supported 00:08:44.891 Transport SGL Data Block: Not Supported 00:08:44.891 Replay Protected Memory Block: Not Supported 00:08:44.891 00:08:44.891 Firmware Slot Information 00:08:44.891 ========================= 00:08:44.891 Active slot: 1 00:08:44.891 Slot 1 Firmware Revision: 1.0 00:08:44.891 00:08:44.891 00:08:44.891 Commands Supported and Effects 00:08:44.891 ============================== 00:08:44.891 Admin Commands 00:08:44.891 -------------- 00:08:44.891 Delete I/O Submission Queue (00h): Supported 00:08:44.891 Create I/O Submission Queue (01h): Supported 00:08:44.891 Get Log Page (02h): Supported 00:08:44.891 Delete I/O Completion Queue (04h): Supported 00:08:44.891 Create I/O Completion Queue (05h): Supported 00:08:44.891 Identify (06h): Supported 00:08:44.891 Abort (08h): Supported
00:08:44.891 Set Features (09h): Supported 00:08:44.891 Get Features (0Ah): Supported 00:08:44.891 Asynchronous Event Request (0Ch): Supported 00:08:44.891 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:44.891 Directive Send (19h): Supported 00:08:44.891 Directive Receive (1Ah): Supported 00:08:44.891 Virtualization Management (1Ch): Supported 00:08:44.892 Doorbell Buffer Config (7Ch): Supported 00:08:44.892 Format NVM (80h): Supported LBA-Change 00:08:44.892 I/O Commands 00:08:44.892 ------------ 00:08:44.892 Flush (00h): Supported LBA-Change 00:08:44.892 Write (01h): Supported LBA-Change 00:08:44.892 Read (02h): Supported 00:08:44.892 Compare (05h): Supported 00:08:44.892 Write Zeroes (08h): Supported LBA-Change 00:08:44.892 Dataset Management (09h): Supported LBA-Change 00:08:44.892 Unknown (0Ch): Supported 00:08:44.892 Unknown (12h): Supported 00:08:44.892 Copy (19h): Supported LBA-Change 00:08:44.892 Unknown (1Dh): Supported LBA-Change 00:08:44.892 00:08:44.892 Error Log 00:08:44.892 ========= 00:08:44.892 00:08:44.892 Arbitration 00:08:44.892 =========== 00:08:44.892 Arbitration Burst: no limit 00:08:44.892 00:08:44.892 Power Management 00:08:44.892 ================ 00:08:44.892 Number of Power States: 1 00:08:44.892 Current Power State: Power State #0 00:08:44.892 Power State #0: 00:08:44.892 Max Power: 25.00 W 00:08:44.892 Non-Operational State: Operational 00:08:44.892 Entry Latency: 16 microseconds 00:08:44.892 Exit Latency: 4 microseconds 00:08:44.892 Relative Read Throughput: 0 00:08:44.892 Relative Read Latency: 0 00:08:44.892 Relative Write Throughput: 0 00:08:44.892 Relative Write Latency: 0 00:08:44.892 Idle Power: Not Reported 00:08:44.892 Active Power: Not Reported 00:08:44.892 Non-Operational Permissive Mode: Not Supported 00:08:44.892 00:08:44.892 Health Information 00:08:44.892 ================== 00:08:44.892 Critical Warnings: 00:08:44.892 Available Spare Space: OK 00:08:44.892 Temperature: OK 00:08:44.892 Device Reliability: OK 00:08:44.892 Read Only: No 00:08:44.892 Volatile Memory Backup: OK 00:08:44.892 Current Temperature: 323 Kelvin (50 Celsius) 00:08:44.892 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:44.892 Available Spare: 0% 00:08:44.892 Available Spare Threshold: 0% 00:08:44.892 Life Percentage Used: 0% 00:08:44.892 Data Units Read: 691 00:08:44.892 Data Units Written: 619 00:08:44.892 Host Read Commands: 39617 00:08:44.892 Host Write Commands: 39403 00:08:44.892 Controller Busy Time: 0 minutes 00:08:44.892 Power Cycles: 0 00:08:44.892 Power On Hours: 0 hours 00:08:44.892 Unsafe Shutdowns: 0 00:08:44.892 Unrecoverable Media Errors: 0 00:08:44.892 Lifetime Error Log Entries: 0 00:08:44.892 Warning Temperature Time: 0 minutes 00:08:44.892 Critical Temperature Time: 0 minutes 00:08:44.892 00:08:44.892 Number of Queues 00:08:44.892 ================ 00:08:44.892 Number of I/O Submission Queues: 64 00:08:44.892 Number of I/O Completion Queues: 64 00:08:44.892 00:08:44.892 ZNS Specific Controller Data 00:08:44.892 ============================ 00:08:44.892 Zone Append Size Limit: 0 00:08:44.892 00:08:44.892 00:08:44.892 Active Namespaces 00:08:44.892 ================= 00:08:44.892 Namespace ID:1 00:08:44.892 Error Recovery Timeout: Unlimited 00:08:44.892 Command Set Identifier: NVM (00h) 00:08:44.892 Deallocate: Supported 00:08:44.892 Deallocated/Unwritten Error: Supported 00:08:44.892 Deallocated Read Value: All 0x00 00:08:44.892 Deallocate in Write Zeroes: Not Supported 00:08:44.892 Deallocated Guard Field: 0xFFFF 00:08:44.892 Flush: 
Supported 00:08:44.892 Reservation: Not Supported 00:08:44.892 Metadata Transferred as: Separate Metadata Buffer 00:08:44.892 Namespace Sharing Capabilities: Private 00:08:44.892 Size (in LBAs): 1548666 (5GiB) 00:08:44.892 Capacity (in LBAs): 1548666 (5GiB) 00:08:44.892 Utilization (in LBAs): 1548666 (5GiB) 00:08:44.892 Thin Provisioning: Not Supported 00:08:44.892 Per-NS Atomic Units: No 00:08:44.892 Maximum Single Source Range Length: 128 00:08:44.892 Maximum Copy Length: 128 00:08:44.892 Maximum Source Range Count: 128 00:08:44.892 NGUID/EUI64 Never Reused: No 00:08:44.892 Namespace Write Protected: No 00:08:44.892 Number of LBA Formats: 8 00:08:44.892 Current LBA Format: LBA Format #07 00:08:44.892 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:44.892 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:44.892 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:44.892 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:44.892 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:44.892 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:44.892 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:44.892 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:44.892 00:08:44.892 NVM Specific Namespace Data 00:08:44.892 =========================== 00:08:44.892 Logical Block Storage Tag Mask: 0 00:08:44.892 Protection Information Capabilities: 00:08:44.892 16b Guard Protection Information Storage Tag Support: No 00:08:44.892 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:44.892 Storage Tag Check Read Support: No 00:08:44.892 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.892 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.892 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.892 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.892 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.892 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.892 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.892 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:44.892 03:59:32 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:44.892 03:59:32 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:45.153 ===================================================== 00:08:45.153 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:45.153 ===================================================== 00:08:45.153 Controller Capabilities/Features 00:08:45.153 ================================ 00:08:45.153 Vendor ID: 1b36 00:08:45.153 Subsystem Vendor ID: 1af4 00:08:45.153 Serial Number: 12341 00:08:45.153 Model Number: QEMU NVMe Ctrl 00:08:45.153 Firmware Version: 8.0.0 00:08:45.153 Recommended Arb Burst: 6 00:08:45.153 IEEE OUI Identifier: 00 54 52 00:08:45.153 Multi-path I/O 00:08:45.153 May have multiple subsystem ports: No 00:08:45.153 May have multiple controllers: No 00:08:45.153 Associated with SR-IOV VF: No 00:08:45.153 Max Data Transfer Size: 524288 00:08:45.153 Max Number of Namespaces: 256 00:08:45.153 Max Number of I/O Queues: 64 00:08:45.153 NVMe 
Specification Version (VS): 1.4 00:08:45.153 NVMe Specification Version (Identify): 1.4 00:08:45.153 Maximum Queue Entries: 2048 00:08:45.153 Contiguous Queues Required: Yes 00:08:45.153 Arbitration Mechanisms Supported 00:08:45.153 Weighted Round Robin: Not Supported 00:08:45.153 Vendor Specific: Not Supported 00:08:45.153 Reset Timeout: 7500 ms 00:08:45.153 Doorbell Stride: 4 bytes 00:08:45.153 NVM Subsystem Reset: Not Supported 00:08:45.153 Command Sets Supported 00:08:45.153 NVM Command Set: Supported 00:08:45.153 Boot Partition: Not Supported 00:08:45.153 Memory Page Size Minimum: 4096 bytes 00:08:45.153 Memory Page Size Maximum: 65536 bytes 00:08:45.153 Persistent Memory Region: Not Supported 00:08:45.153 Optional Asynchronous Events Supported 00:08:45.153 Namespace Attribute Notices: Supported 00:08:45.153 Firmware Activation Notices: Not Supported 00:08:45.153 ANA Change Notices: Not Supported 00:08:45.153 PLE Aggregate Log Change Notices: Not Supported 00:08:45.153 LBA Status Info Alert Notices: Not Supported 00:08:45.153 EGE Aggregate Log Change Notices: Not Supported 00:08:45.153 Normal NVM Subsystem Shutdown event: Not Supported 00:08:45.153 Zone Descriptor Change Notices: Not Supported 00:08:45.153 Discovery Log Change Notices: Not Supported 00:08:45.153 Controller Attributes 00:08:45.153 128-bit Host Identifier: Not Supported 00:08:45.153 Non-Operational Permissive Mode: Not Supported 00:08:45.153 NVM Sets: Not Supported 00:08:45.153 Read Recovery Levels: Not Supported 00:08:45.153 Endurance Groups: Not Supported 00:08:45.153 Predictable Latency Mode: Not Supported 00:08:45.153 Traffic Based Keep Alive: Not Supported 00:08:45.153 Namespace Granularity: Not Supported 00:08:45.153 SQ Associations: Not Supported 00:08:45.153 UUID List: Not Supported 00:08:45.153 Multi-Domain Subsystem: Not Supported 00:08:45.153 Fixed Capacity Management: Not Supported 00:08:45.153 Variable Capacity Management: Not Supported 00:08:45.153 Delete Endurance Group: Not Supported 00:08:45.153 Delete NVM Set: Not Supported 00:08:45.153 Extended LBA Formats Supported: Supported 00:08:45.153 Flexible Data Placement Supported: Not Supported 00:08:45.153 00:08:45.153 Controller Memory Buffer Support 00:08:45.153 ================================ 00:08:45.153 Supported: No 00:08:45.153 00:08:45.153 Persistent Memory Region Support 00:08:45.153 ================================ 00:08:45.153 Supported: No 00:08:45.153 00:08:45.153 Admin Command Set Attributes 00:08:45.153 ============================ 00:08:45.153 Security Send/Receive: Not Supported 00:08:45.153 Format NVM: Supported 00:08:45.153 Firmware Activate/Download: Not Supported 00:08:45.153 Namespace Management: Supported 00:08:45.153 Device Self-Test: Not Supported 00:08:45.153 Directives: Supported 00:08:45.153 NVMe-MI: Not Supported 00:08:45.153 Virtualization Management: Not Supported 00:08:45.153 Doorbell Buffer Config: Supported 00:08:45.153 Get LBA Status Capability: Not Supported 00:08:45.153 Command & Feature Lockdown Capability: Not Supported 00:08:45.153 Abort Command Limit: 4 00:08:45.153 Async Event Request Limit: 4 00:08:45.153 Number of Firmware Slots: N/A 00:08:45.153 Firmware Slot 1 Read-Only: N/A 00:08:45.153 Firmware Activation Without Reset: N/A 00:08:45.153 Multiple Update Detection Support: N/A 00:08:45.153 Firmware Update Granularity: No Information Provided 00:08:45.153 Per-Namespace SMART Log: Yes 00:08:45.153 Asymmetric Namespace Access Log Page: Not Supported 00:08:45.153 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:45.153 Command Effects Log Page: Supported 00:08:45.153 Get Log Page Extended Data: Supported 00:08:45.153 Telemetry Log Pages: Not Supported 00:08:45.153 Persistent Event Log Pages: Not Supported 00:08:45.153 Supported Log Pages Log Page: May Support 00:08:45.153 Commands Supported & Effects Log Page: Not Supported 00:08:45.153 Feature Identifiers & Effects Log Page: May Support 00:08:45.153 NVMe-MI Commands & Effects Log Page: May Support 00:08:45.153 Data Area 4 for Telemetry Log: Not Supported 00:08:45.154 Error Log Page Entries Supported: 1 00:08:45.154 Keep Alive: Not Supported 00:08:45.154
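The queue attributes printed next (64-byte submission and 16-byte completion entries, 64 I/O queues) are what an I/O queue pair allocation draws on. A sketch that allocates one qpair and polls a single one-block read to completion, assuming ctrlr and ns handles from a prior connect; the buffer comes from spdk_zmalloc so it is DMA-safe. Error handling is trimmed:

#include <stdbool.h>
#include "spdk/nvme.h"
#include "spdk/env.h"

static volatile bool g_read_done;

/* I/O completion callback. */
static void
read_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
    g_read_done = true;
}

static void
read_lba0(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns)
{
    struct spdk_nvme_qpair *qpair;
    void *buf;

    /* Default qpair options; the controller above advertises 64 queues. */
    qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
    buf = spdk_zmalloc(spdk_nvme_ns_get_sector_size(ns), 0x1000, NULL,
                       SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

    g_read_done = false;
    spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* LBA */, 1 /* blocks */,
                          read_done, NULL, 0 /* io_flags */);
    while (!g_read_done) {
        spdk_nvme_qpair_process_completions(qpair, 0);
    }

    spdk_free(buf);
    spdk_nvme_ctrlr_free_io_qpair(qpair);
}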
00:08:45.154 NVM Command Set Attributes 00:08:45.154 ========================== 00:08:45.154 Submission Queue Entry Size 00:08:45.154 Max: 64 00:08:45.154 Min: 64 00:08:45.154 Completion Queue Entry Size 00:08:45.154 Max: 16 00:08:45.154 Min: 16 00:08:45.154 Number of Namespaces: 256 00:08:45.154 Compare Command: Supported 00:08:45.154 Write Uncorrectable Command: Not Supported 00:08:45.154 Dataset Management Command: Supported 00:08:45.154 Write Zeroes Command: Supported 00:08:45.154 Set Features Save Field: Supported 00:08:45.154 Reservations: Not Supported 00:08:45.154 Timestamp: Supported 00:08:45.154 Copy: Supported 00:08:45.154 Volatile Write Cache: Present 00:08:45.154 Atomic Write Unit (Normal): 1 00:08:45.154 Atomic Write Unit (PFail): 1 00:08:45.154 Atomic Compare & Write Unit: 1 00:08:45.154 Fused Compare & Write: Not Supported 00:08:45.154 Scatter-Gather List 00:08:45.154 SGL Command Set: Supported 00:08:45.154 SGL Keyed: Not Supported 00:08:45.154 SGL Bit Bucket Descriptor: Not Supported 00:08:45.154 SGL Metadata Pointer: Not Supported 00:08:45.154 Oversized SGL: Not Supported 00:08:45.154 SGL Metadata Address: Not Supported 00:08:45.154 SGL Offset: Not Supported 00:08:45.154 Transport SGL Data Block: Not Supported 00:08:45.154 Replay Protected Memory Block: Not Supported 00:08:45.154 00:08:45.154 Firmware Slot Information 00:08:45.154 ========================= 00:08:45.154 Active slot: 1 00:08:45.154 Slot 1 Firmware Revision: 1.0 00:08:45.154 00:08:45.154 00:08:45.154 Commands Supported and Effects 00:08:45.154 ============================== 00:08:45.154 Admin Commands 00:08:45.154 -------------- 00:08:45.154 Delete I/O Submission Queue (00h): Supported 00:08:45.154 Create I/O Submission Queue (01h): Supported 00:08:45.154 Get Log Page (02h): Supported 00:08:45.154 Delete I/O Completion Queue (04h): Supported 00:08:45.154 Create I/O Completion Queue (05h): Supported 00:08:45.154 Identify (06h): Supported 00:08:45.154 Abort (08h): Supported 00:08:45.154 Set Features (09h): Supported 00:08:45.154 Get Features (0Ah): Supported 00:08:45.154 Asynchronous Event Request (0Ch): Supported 00:08:45.154 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:45.154 Directive Send (19h): Supported 00:08:45.154 Directive Receive (1Ah): Supported 00:08:45.154 Virtualization Management (1Ch): Supported 00:08:45.154 Doorbell Buffer Config (7Ch): Supported 00:08:45.154 Format NVM (80h): Supported LBA-Change 00:08:45.154 I/O Commands 00:08:45.154 ------------ 00:08:45.154 Flush (00h): Supported LBA-Change 00:08:45.154 Write (01h): Supported LBA-Change 00:08:45.154 Read (02h): Supported 00:08:45.154 Compare (05h): Supported 00:08:45.154 Write Zeroes (08h): Supported LBA-Change 00:08:45.154 Dataset Management (09h): Supported LBA-Change 00:08:45.154 Unknown (0Ch): Supported 00:08:45.154 Unknown (12h): Supported 00:08:45.154 Copy (19h): Supported LBA-Change 00:08:45.154 Unknown (1Dh):
Supported LBA-Change 00:08:45.154 00:08:45.154 Error Log 00:08:45.154 ========= 00:08:45.154 00:08:45.154 Arbitration 00:08:45.154 =========== 00:08:45.154 Arbitration Burst: no limit 00:08:45.154 00:08:45.154 Power Management 00:08:45.154 ================ 00:08:45.154 Number of Power States: 1 00:08:45.154 Current Power State: Power State #0 00:08:45.154 Power State #0: 00:08:45.154 Max Power: 25.00 W 00:08:45.154 Non-Operational State: Operational 00:08:45.154 Entry Latency: 16 microseconds 00:08:45.154 Exit Latency: 4 microseconds 00:08:45.154 Relative Read Throughput: 0 00:08:45.154 Relative Read Latency: 0 00:08:45.154 Relative Write Throughput: 0 00:08:45.154 Relative Write Latency: 0 00:08:45.154 Idle Power: Not Reported 00:08:45.154 Active Power: Not Reported 00:08:45.154 Non-Operational Permissive Mode: Not Supported 00:08:45.154 00:08:45.154 Health Information 00:08:45.154 ================== 00:08:45.154 Critical Warnings: 00:08:45.154 Available Spare Space: OK 00:08:45.154 Temperature: OK 00:08:45.154 Device Reliability: OK 00:08:45.154 Read Only: No 00:08:45.154 Volatile Memory Backup: OK 00:08:45.154 Current Temperature: 323 Kelvin (50 Celsius) 00:08:45.154 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:45.154 Available Spare: 0% 00:08:45.154 Available Spare Threshold: 0% 00:08:45.154 Life Percentage Used: 0% 00:08:45.154 Data Units Read: 1059 00:08:45.154 Data Units Written: 926 00:08:45.154 Host Read Commands: 58259 00:08:45.154 Host Write Commands: 57042 00:08:45.154 Controller Busy Time: 0 minutes 00:08:45.154 Power Cycles: 0 00:08:45.154 Power On Hours: 0 hours 00:08:45.154 Unsafe Shutdowns: 0 00:08:45.154 Unrecoverable Media Errors: 0 00:08:45.154 Lifetime Error Log Entries: 0 00:08:45.154 Warning Temperature Time: 0 minutes 00:08:45.154 Critical Temperature Time: 0 minutes 00:08:45.154 00:08:45.154 Number of Queues 00:08:45.154 ================ 00:08:45.154 Number of I/O Submission Queues: 64 00:08:45.154 Number of I/O Completion Queues: 64 00:08:45.154 00:08:45.154 ZNS Specific Controller Data 00:08:45.154 ============================ 00:08:45.154 Zone Append Size Limit: 0 00:08:45.154 00:08:45.154 00:08:45.154 Active Namespaces 00:08:45.154 ================= 00:08:45.154 Namespace ID:1 00:08:45.154 Error Recovery Timeout: Unlimited 00:08:45.154 Command Set Identifier: NVM (00h) 00:08:45.154 Deallocate: Supported 00:08:45.154 Deallocated/Unwritten Error: Supported 00:08:45.154 Deallocated Read Value: All 0x00 00:08:45.154 Deallocate in Write Zeroes: Not Supported 00:08:45.154 Deallocated Guard Field: 0xFFFF 00:08:45.154 Flush: Supported 00:08:45.154 Reservation: Not Supported 00:08:45.154 Namespace Sharing Capabilities: Private 00:08:45.154 Size (in LBAs): 1310720 (5GiB) 00:08:45.154 Capacity (in LBAs): 1310720 (5GiB) 00:08:45.154 Utilization (in LBAs): 1310720 (5GiB) 00:08:45.154 Thin Provisioning: Not Supported 00:08:45.154 Per-NS Atomic Units: No 00:08:45.154 Maximum Single Source Range Length: 128 00:08:45.154 Maximum Copy Length: 128 00:08:45.154 Maximum Source Range Count: 128 00:08:45.154 NGUID/EUI64 Never Reused: No 00:08:45.154 Namespace Write Protected: No 00:08:45.154 Number of LBA Formats: 8 00:08:45.154 Current LBA Format: LBA Format #04 00:08:45.154 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:45.154 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:45.154 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:45.154 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:45.154 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:45.154 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:45.154 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:45.154 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:45.154 00:08:45.154 NVM Specific Namespace Data 00:08:45.154 =========================== 00:08:45.154 Logical Block Storage Tag Mask: 0 00:08:45.154 Protection Information Capabilities: 00:08:45.154 16b Guard Protection Information Storage Tag Support: No 00:08:45.154 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:45.154 Storage Tag Check Read Support: No 00:08:45.154 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.154 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.154 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.154 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.154 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.154 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.154 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.154 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.154 03:59:32 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:45.154 03:59:32 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:45.154 ===================================================== 00:08:45.154 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:45.154 ===================================================== 00:08:45.154 Controller Capabilities/Features 00:08:45.154 ================================ 00:08:45.154 Vendor ID: 1b36 00:08:45.154 Subsystem Vendor ID: 1af4 00:08:45.154 Serial Number: 12342 00:08:45.154 Model Number: QEMU NVMe Ctrl 00:08:45.154 Firmware Version: 8.0.0 00:08:45.154 Recommended Arb Burst: 6 00:08:45.154 IEEE OUI Identifier: 00 54 52 00:08:45.154 Multi-path I/O 00:08:45.154 May have multiple subsystem ports: No 00:08:45.154 May have multiple controllers: No 00:08:45.154 Associated with SR-IOV VF: No 00:08:45.154 Max Data Transfer Size: 524288 00:08:45.154 Max Number of Namespaces: 256 00:08:45.154 Max Number of I/O Queues: 64 00:08:45.154 NVMe Specification Version (VS): 1.4 00:08:45.154 NVMe Specification Version (Identify): 1.4 00:08:45.154 Maximum Queue Entries: 2048 00:08:45.154 Contiguous Queues Required: Yes 00:08:45.154 Arbitration Mechanisms Supported 00:08:45.154 Weighted Round Robin: Not Supported 00:08:45.154 Vendor Specific: Not Supported 00:08:45.154 Reset Timeout: 7500 ms 00:08:45.154 Doorbell Stride: 4 bytes 00:08:45.154 NVM Subsystem Reset: Not Supported 00:08:45.154 Command Sets Supported 00:08:45.154 NVM Command Set: Supported 00:08:45.154 Boot Partition: Not Supported 00:08:45.154 Memory Page Size Minimum: 4096 bytes 00:08:45.154 Memory Page Size Maximum: 65536 bytes 00:08:45.154 Persistent Memory Region: Not Supported 00:08:45.154 Optional Asynchronous Events Supported 00:08:45.154 Namespace Attribute Notices: Supported 00:08:45.154 Firmware Activation Notices: Not Supported 00:08:45.154 ANA Change Notices: Not Supported 00:08:45.154 PLE Aggregate Log Change Notices: Not Supported 00:08:45.154 LBA Status Info Alert Notices: 
Not Supported 00:08:45.154 EGE Aggregate Log Change Notices: Not Supported 00:08:45.154 Normal NVM Subsystem Shutdown event: Not Supported 00:08:45.154 Zone Descriptor Change Notices: Not Supported 00:08:45.154 Discovery Log Change Notices: Not Supported 00:08:45.154 Controller Attributes 00:08:45.154 128-bit Host Identifier: Not Supported 00:08:45.154 Non-Operational Permissive Mode: Not Supported 00:08:45.154 NVM Sets: Not Supported 00:08:45.154 Read Recovery Levels: Not Supported 00:08:45.154 Endurance Groups: Not Supported 00:08:45.154 Predictable Latency Mode: Not Supported 00:08:45.154 Traffic Based Keep Alive: Not Supported 00:08:45.154 Namespace Granularity: Not Supported 00:08:45.154 SQ Associations: Not Supported 00:08:45.154 UUID List: Not Supported 00:08:45.154 Multi-Domain Subsystem: Not Supported 00:08:45.154 Fixed Capacity Management: Not Supported 00:08:45.154 Variable Capacity Management: Not Supported 00:08:45.154 Delete Endurance Group: Not Supported 00:08:45.154 Delete NVM Set: Not Supported 00:08:45.154 Extended LBA Formats Supported: Supported 00:08:45.154 Flexible Data Placement Supported: Not Supported 00:08:45.154 00:08:45.154 Controller Memory Buffer Support 00:08:45.154 ================================ 00:08:45.154 Supported: No 00:08:45.154 00:08:45.154 Persistent Memory Region Support 00:08:45.154 ================================ 00:08:45.154 Supported: No 00:08:45.154 00:08:45.154 Admin Command Set Attributes 00:08:45.154 ============================ 00:08:45.154 Security Send/Receive: Not Supported 00:08:45.154 Format NVM: Supported 00:08:45.154 Firmware Activate/Download: Not Supported 00:08:45.154 Namespace Management: Supported 00:08:45.154 Device Self-Test: Not Supported 00:08:45.154 Directives: Supported 00:08:45.154 NVMe-MI: Not Supported 00:08:45.154 Virtualization Management: Not Supported 00:08:45.154 Doorbell Buffer Config: Supported 00:08:45.154 Get LBA Status Capability: Not Supported 00:08:45.154 Command & Feature Lockdown Capability: Not Supported 00:08:45.154 Abort Command Limit: 4 00:08:45.154 Async Event Request Limit: 4 00:08:45.154 Number of Firmware Slots: N/A 00:08:45.154 Firmware Slot 1 Read-Only: N/A 00:08:45.154 Firmware Activation Without Reset: N/A 00:08:45.154 Multiple Update Detection Support: N/A 00:08:45.154 Firmware Update Granularity: No Information Provided 00:08:45.154 Per-Namespace SMART Log: Yes 00:08:45.154 Asymmetric Namespace Access Log Page: Not Supported 00:08:45.154 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:45.154 Command Effects Log Page: Supported 00:08:45.154 Get Log Page Extended Data: Supported 00:08:45.154 Telemetry Log Pages: Not Supported 00:08:45.154 Persistent Event Log Pages: Not Supported 00:08:45.154 Supported Log Pages Log Page: May Support 00:08:45.154 Commands Supported & Effects Log Page: Not Supported 00:08:45.154 Feature Identifiers & Effects Log Page: May Support 00:08:45.154 NVMe-MI Commands & Effects Log Page: May Support 00:08:45.154 Data Area 4 for Telemetry Log: Not Supported 00:08:45.154 Error Log Page Entries Supported: 1 00:08:45.154 Keep Alive: Not Supported 00:08:45.154 00:08:45.154 NVM Command Set Attributes 00:08:45.154 ========================== 00:08:45.154 Submission Queue Entry Size 00:08:45.154 Max: 64 00:08:45.154 Min: 64 00:08:45.154 Completion Queue Entry Size 00:08:45.154 Max: 16 00:08:45.154 Min: 16 00:08:45.154 Number of Namespaces: 256 00:08:45.154 Compare Command: Supported 00:08:45.154 Write Uncorrectable Command: Not Supported 00:08:45.154 Dataset Management Command:
Supported 00:08:45.154 Write Zeroes Command: Supported 00:08:45.154 Set Features Save Field: Supported 00:08:45.154 Reservations: Not Supported 00:08:45.154 Timestamp: Supported 00:08:45.154 Copy: Supported 00:08:45.154 Volatile Write Cache: Present 00:08:45.154 Atomic Write Unit (Normal): 1 00:08:45.154 Atomic Write Unit (PFail): 1 00:08:45.154 Atomic Compare & Write Unit: 1 00:08:45.154 Fused Compare & Write: Not Supported 00:08:45.155 Scatter-Gather List 00:08:45.155 SGL Command Set: Supported 00:08:45.155 SGL Keyed: Not Supported 00:08:45.155 SGL Bit Bucket Descriptor: Not Supported 00:08:45.155 SGL Metadata Pointer: Not Supported 00:08:45.155 Oversized SGL: Not Supported 00:08:45.155 SGL Metadata Address: Not Supported 00:08:45.155 SGL Offset: Not Supported 00:08:45.155 Transport SGL Data Block: Not Supported 00:08:45.155 Replay Protected Memory Block: Not Supported 00:08:45.155 00:08:45.155 Firmware Slot Information 00:08:45.155 ========================= 00:08:45.155 Active slot: 1 00:08:45.155 Slot 1 Firmware Revision: 1.0 00:08:45.155 00:08:45.155 00:08:45.155 Commands Supported and Effects 00:08:45.155 ============================== 00:08:45.155 Admin Commands 00:08:45.155 -------------- 00:08:45.155 Delete I/O Submission Queue (00h): Supported 00:08:45.155 Create I/O Submission Queue (01h): Supported 00:08:45.155 Get Log Page (02h): Supported 00:08:45.155 Delete I/O Completion Queue (04h): Supported 00:08:45.155 Create I/O Completion Queue (05h): Supported 00:08:45.155 Identify (06h): Supported 00:08:45.155 Abort (08h): Supported 00:08:45.155 Set Features (09h): Supported 00:08:45.155 Get Features (0Ah): Supported 00:08:45.155 Asynchronous Event Request (0Ch): Supported 00:08:45.155 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:45.155 Directive Send (19h): Supported 00:08:45.155 Directive Receive (1Ah): Supported 00:08:45.155 Virtualization Management (1Ch): Supported 00:08:45.155 Doorbell Buffer Config (7Ch): Supported 00:08:45.155 Format NVM (80h): Supported LBA-Change 00:08:45.155 I/O Commands 00:08:45.155 ------------ 00:08:45.155 Flush (00h): Supported LBA-Change 00:08:45.155 Write (01h): Supported LBA-Change 00:08:45.155 Read (02h): Supported 00:08:45.155 Compare (05h): Supported 00:08:45.155 Write Zeroes (08h): Supported LBA-Change 00:08:45.155 Dataset Management (09h): Supported LBA-Change 00:08:45.155 Unknown (0Ch): Supported 00:08:45.155 Unknown (12h): Supported 00:08:45.155 Copy (19h): Supported LBA-Change 00:08:45.155 Unknown (1Dh): Supported LBA-Change 00:08:45.155 00:08:45.155 Error Log 00:08:45.155 ========= 00:08:45.155 00:08:45.155 Arbitration 00:08:45.155 =========== 00:08:45.155 Arbitration Burst: no limit 00:08:45.155 00:08:45.155 Power Management 00:08:45.155 ================ 00:08:45.155 Number of Power States: 1 00:08:45.155 Current Power State: Power State #0 00:08:45.155 Power State #0: 00:08:45.155 Max Power: 25.00 W 00:08:45.155 Non-Operational State: Operational 00:08:45.155 Entry Latency: 16 microseconds 00:08:45.155 Exit Latency: 4 microseconds 00:08:45.155 Relative Read Throughput: 0 00:08:45.155 Relative Read Latency: 0 00:08:45.155 Relative Write Throughput: 0 00:08:45.155 Relative Write Latency: 0 00:08:45.155 Idle Power: Not Reported 00:08:45.155 Active Power: Not Reported 00:08:45.155 Non-Operational Permissive Mode: Not Supported 00:08:45.155 00:08:45.155 Health Information 00:08:45.155 ================== 00:08:45.155 Critical Warnings: 00:08:45.155 Available Spare Space: OK 00:08:45.155 Temperature: OK 00:08:45.155 Device 
Reliability: OK 00:08:45.155 Read Only: No 00:08:45.155 Volatile Memory Backup: OK 00:08:45.155 Current Temperature: 323 Kelvin (50 Celsius) 00:08:45.155 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:45.155 Available Spare: 0% 00:08:45.155 Available Spare Threshold: 0% 00:08:45.155 Life Percentage Used: 0% 00:08:45.155 Data Units Read: 2175 00:08:45.155 Data Units Written: 1962 00:08:45.155 Host Read Commands: 120372 00:08:45.155 Host Write Commands: 118642 00:08:45.155 Controller Busy Time: 0 minutes 00:08:45.155 Power Cycles: 0 00:08:45.155 Power On Hours: 0 hours 00:08:45.155 Unsafe Shutdowns: 0 00:08:45.155 Unrecoverable Media Errors: 0 00:08:45.155 Lifetime Error Log Entries: 0 00:08:45.155 Warning Temperature Time: 0 minutes 00:08:45.155 Critical Temperature Time: 0 minutes 00:08:45.155 00:08:45.155 Number of Queues 00:08:45.155 ================ 00:08:45.155 Number of I/O Submission Queues: 64 00:08:45.155 Number of I/O Completion Queues: 64 00:08:45.155 00:08:45.155 ZNS Specific Controller Data 00:08:45.155 ============================ 00:08:45.155 Zone Append Size Limit: 0 00:08:45.155 00:08:45.155 00:08:45.155 Active Namespaces 00:08:45.155 ================= 00:08:45.155 Namespace ID:1 00:08:45.155 Error Recovery Timeout: Unlimited 00:08:45.155 Command Set Identifier: NVM (00h) 00:08:45.155 Deallocate: Supported 00:08:45.155 Deallocated/Unwritten Error: Supported 00:08:45.155 Deallocated Read Value: All 0x00 00:08:45.155 Deallocate in Write Zeroes: Not Supported 00:08:45.155 Deallocated Guard Field: 0xFFFF 00:08:45.155 Flush: Supported 00:08:45.155 Reservation: Not Supported 00:08:45.155 Namespace Sharing Capabilities: Private 00:08:45.155 Size (in LBAs): 1048576 (4GiB) 00:08:45.155 Capacity (in LBAs): 1048576 (4GiB) 00:08:45.155 Utilization (in LBAs): 1048576 (4GiB) 00:08:45.155 Thin Provisioning: Not Supported 00:08:45.155 Per-NS Atomic Units: No 00:08:45.155 Maximum Single Source Range Length: 128 00:08:45.155 Maximum Copy Length: 128 00:08:45.155 Maximum Source Range Count: 128 00:08:45.155 NGUID/EUI64 Never Reused: No 00:08:45.155 Namespace Write Protected: No 00:08:45.155 Number of LBA Formats: 8 00:08:45.155 Current LBA Format: LBA Format #04 00:08:45.155 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:45.155 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:45.155 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:45.155 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:45.155 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:45.155 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:45.155 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:45.155 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:45.155 00:08:45.155 NVM Specific Namespace Data 00:08:45.155 =========================== 00:08:45.155 Logical Block Storage Tag Mask: 0 00:08:45.155 Protection Information Capabilities: 00:08:45.155 16b Guard Protection Information Storage Tag Support: No 00:08:45.155 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:45.155 Storage Tag Check Read Support: No 00:08:45.155 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Namespace ID:2 00:08:45.155 Error Recovery Timeout: Unlimited 00:08:45.155 Command Set Identifier: NVM (00h) 00:08:45.155 Deallocate: Supported 00:08:45.155 Deallocated/Unwritten Error: Supported 00:08:45.155 Deallocated Read Value: All 0x00 00:08:45.155 Deallocate in Write Zeroes: Not Supported 00:08:45.155 Deallocated Guard Field: 0xFFFF 00:08:45.155 Flush: Supported 00:08:45.155 Reservation: Not Supported 00:08:45.155 Namespace Sharing Capabilities: Private 00:08:45.155 Size (in LBAs): 1048576 (4GiB) 00:08:45.155 Capacity (in LBAs): 1048576 (4GiB) 00:08:45.155 Utilization (in LBAs): 1048576 (4GiB) 00:08:45.155 Thin Provisioning: Not Supported 00:08:45.155 Per-NS Atomic Units: No 00:08:45.155 Maximum Single Source Range Length: 128 00:08:45.155 Maximum Copy Length: 128 00:08:45.155 Maximum Source Range Count: 128 00:08:45.155 NGUID/EUI64 Never Reused: No 00:08:45.155 Namespace Write Protected: No 00:08:45.155 Number of LBA Formats: 8 00:08:45.155 Current LBA Format: LBA Format #04 00:08:45.155 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:45.155 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:45.155 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:45.155 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:45.155 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:45.155 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:45.155 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:45.155 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:45.155 00:08:45.155 NVM Specific Namespace Data 00:08:45.155 =========================== 00:08:45.155 Logical Block Storage Tag Mask: 0 00:08:45.155 Protection Information Capabilities: 00:08:45.155 16b Guard Protection Information Storage Tag Support: No 00:08:45.155 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:45.155 Storage Tag Check Read Support: No 00:08:45.155 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Namespace ID:3 00:08:45.155 Error Recovery Timeout: Unlimited 00:08:45.155 Command Set Identifier: NVM (00h) 00:08:45.155 Deallocate: Supported 00:08:45.155 Deallocated/Unwritten Error: Supported 00:08:45.155 Deallocated Read Value: All 0x00 00:08:45.155 Deallocate in Write Zeroes: Not Supported 00:08:45.155 Deallocated Guard Field: 0xFFFF 00:08:45.155 Flush: Supported 00:08:45.155 Reservation: Not Supported 00:08:45.155 
Namespace Sharing Capabilities: Private 00:08:45.155 Size (in LBAs): 1048576 (4GiB) 00:08:45.155 Capacity (in LBAs): 1048576 (4GiB) 00:08:45.155 Utilization (in LBAs): 1048576 (4GiB) 00:08:45.155 Thin Provisioning: Not Supported 00:08:45.155 Per-NS Atomic Units: No 00:08:45.155 Maximum Single Source Range Length: 128 00:08:45.155 Maximum Copy Length: 128 00:08:45.155 Maximum Source Range Count: 128 00:08:45.155 NGUID/EUI64 Never Reused: No 00:08:45.155 Namespace Write Protected: No 00:08:45.155 Number of LBA Formats: 8 00:08:45.155 Current LBA Format: LBA Format #04 00:08:45.155 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:45.155 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:45.155 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:45.155 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:45.155 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:45.155 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:45.155 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:45.155 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:45.155 00:08:45.155 NVM Specific Namespace Data 00:08:45.155 =========================== 00:08:45.155 Logical Block Storage Tag Mask: 0 00:08:45.155 Protection Information Capabilities: 00:08:45.155 16b Guard Protection Information Storage Tag Support: No 00:08:45.155 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:45.155 Storage Tag Check Read Support: No 00:08:45.155 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.155 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.417 03:59:32 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:45.417 03:59:32 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:45.417 ===================================================== 00:08:45.417 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:45.417 ===================================================== 00:08:45.417 Controller Capabilities/Features 00:08:45.417 ================================ 00:08:45.417 Vendor ID: 1b36 00:08:45.417 Subsystem Vendor ID: 1af4 00:08:45.417 Serial Number: 12343 00:08:45.417 Model Number: QEMU NVMe Ctrl 00:08:45.417 Firmware Version: 8.0.0 00:08:45.417 Recommended Arb Burst: 6 00:08:45.417 IEEE OUI Identifier: 00 54 52 00:08:45.417 Multi-path I/O 00:08:45.417 May have multiple subsystem ports: No 00:08:45.417 May have multiple controllers: Yes 00:08:45.417 Associated with SR-IOV VF: No 00:08:45.417 Max Data Transfer Size: 524288 00:08:45.417 Max Number of Namespaces: 256 00:08:45.417 Max Number of I/O Queues: 64 00:08:45.417 NVMe Specification Version (VS): 1.4 00:08:45.417 NVMe Specification Version (Identify): 1.4 00:08:45.417 Maximum Queue Entries: 2048 
00:08:45.417 Contiguous Queues Required: Yes 00:08:45.417 Arbitration Mechanisms Supported 00:08:45.417 Weighted Round Robin: Not Supported 00:08:45.417 Vendor Specific: Not Supported 00:08:45.417 Reset Timeout: 7500 ms 00:08:45.417 Doorbell Stride: 4 bytes 00:08:45.417 NVM Subsystem Reset: Not Supported 00:08:45.417 Command Sets Supported 00:08:45.417 NVM Command Set: Supported 00:08:45.417 Boot Partition: Not Supported 00:08:45.417 Memory Page Size Minimum: 4096 bytes 00:08:45.417 Memory Page Size Maximum: 65536 bytes 00:08:45.417 Persistent Memory Region: Not Supported 00:08:45.417 Optional Asynchronous Events Supported 00:08:45.417 Namespace Attribute Notices: Supported 00:08:45.417 Firmware Activation Notices: Not Supported 00:08:45.417 ANA Change Notices: Not Supported 00:08:45.417 PLE Aggregate Log Change Notices: Not Supported 00:08:45.417 LBA Status Info Alert Notices: Not Supported 00:08:45.417 EGE Aggregate Log Change Notices: Not Supported 00:08:45.417 Normal NVM Subsystem Shutdown event: Not Supported 00:08:45.417 Zone Descriptor Change Notices: Not Supported 00:08:45.417 Discovery Log Change Notices: Not Supported 00:08:45.417 Controller Attributes 00:08:45.417 128-bit Host Identifier: Not Supported 00:08:45.417 Non-Operational Permissive Mode: Not Supported 00:08:45.417 NVM Sets: Not Supported 00:08:45.417 Read Recovery Levels: Not Supported 00:08:45.417 Endurance Groups: Supported 00:08:45.417 Predictable Latency Mode: Not Supported 00:08:45.417 Traffic Based Keep Alive: Not Supported 00:08:45.417 Namespace Granularity: Not Supported 00:08:45.417 SQ Associations: Not Supported 00:08:45.417 UUID List: Not Supported 00:08:45.417 Multi-Domain Subsystem: Not Supported 00:08:45.417 Fixed Capacity Management: Not Supported 00:08:45.417 Variable Capacity Management: Not Supported 00:08:45.417 Delete Endurance Group: Not Supported 00:08:45.417 Delete NVM Set: Not Supported 00:08:45.417 Extended LBA Formats Supported: Supported 00:08:45.417 Flexible Data Placement Supported: Supported 00:08:45.417 00:08:45.417 Controller Memory Buffer Support 00:08:45.417 ================================ 00:08:45.417 Supported: No 00:08:45.417 00:08:45.417 Persistent Memory Region Support 00:08:45.417 ================================ 00:08:45.417 Supported: No 00:08:45.417 00:08:45.417 Admin Command Set Attributes 00:08:45.417 ============================ 00:08:45.417 Security Send/Receive: Not Supported 00:08:45.417 Format NVM: Supported 00:08:45.417 Firmware Activate/Download: Not Supported 00:08:45.417 Namespace Management: Supported 00:08:45.417 Device Self-Test: Not Supported 00:08:45.417 Directives: Supported 00:08:45.417 NVMe-MI: Not Supported 00:08:45.417 Virtualization Management: Not Supported 00:08:45.417 Doorbell Buffer Config: Supported 00:08:45.417 Get LBA Status Capability: Not Supported 00:08:45.417 Command & Feature Lockdown Capability: Not Supported 00:08:45.417 Abort Command Limit: 4 00:08:45.417 Async Event Request Limit: 4 00:08:45.417 Number of Firmware Slots: N/A 00:08:45.417 Firmware Slot 1 Read-Only: N/A 00:08:45.417 Firmware Activation Without Reset: N/A 00:08:45.417 Multiple Update Detection Support: N/A 00:08:45.417 Firmware Update Granularity: No Information Provided 00:08:45.417 Per-Namespace SMART Log: Yes 00:08:45.417 Asymmetric Namespace Access Log Page: Not Supported 00:08:45.417 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:45.417 Command Effects Log Page: Supported 00:08:45.417 Get Log Page Extended Data: Supported 00:08:45.417 Telemetry Log Pages: Not
Supported 00:08:45.417 Persistent Event Log Pages: Not Supported 00:08:45.417 Supported Log Pages Log Page: May Support 00:08:45.417 Commands Supported & Effects Log Page: Not Supported 00:08:45.417 Feature Identifiers & Effects Log Page: May Support 00:08:45.417 NVMe-MI Commands & Effects Log Page: May Support 00:08:45.417 Data Area 4 for Telemetry Log: Not Supported 00:08:45.417 Error Log Page Entries Supported: 1 00:08:45.417 Keep Alive: Not Supported 00:08:45.417 00:08:45.417 NVM Command Set Attributes 00:08:45.417 ========================== 00:08:45.417 Submission Queue Entry Size 00:08:45.417 Max: 64 00:08:45.417 Min: 64 00:08:45.417 Completion Queue Entry Size 00:08:45.417 Max: 16 00:08:45.417 Min: 16 00:08:45.417 Number of Namespaces: 256 00:08:45.417 Compare Command: Supported 00:08:45.417 Write Uncorrectable Command: Not Supported 00:08:45.417 Dataset Management Command: Supported 00:08:45.417 Write Zeroes Command: Supported 00:08:45.417 Set Features Save Field: Supported 00:08:45.417 Reservations: Not Supported 00:08:45.417 Timestamp: Supported 00:08:45.417 Copy: Supported 00:08:45.417 Volatile Write Cache: Present 00:08:45.417 Atomic Write Unit (Normal): 1 00:08:45.417 Atomic Write Unit (PFail): 1 00:08:45.417 Atomic Compare & Write Unit: 1 00:08:45.417 Fused Compare & Write: Not Supported 00:08:45.417 Scatter-Gather List 00:08:45.417 SGL Command Set: Supported 00:08:45.417 SGL Keyed: Not Supported 00:08:45.417 SGL Bit Bucket Descriptor: Not Supported 00:08:45.417 SGL Metadata Pointer: Not Supported 00:08:45.417 Oversized SGL: Not Supported 00:08:45.417 SGL Metadata Address: Not Supported 00:08:45.417 SGL Offset: Not Supported 00:08:45.417 Transport SGL Data Block: Not Supported 00:08:45.417 Replay Protected Memory Block: Not Supported 00:08:45.417 00:08:45.417 Firmware Slot Information 00:08:45.417 ========================= 00:08:45.417 Active slot: 1 00:08:45.417 Slot 1 Firmware Revision: 1.0 00:08:45.417 00:08:45.417 00:08:45.417 Commands Supported and Effects 00:08:45.417 ============================== 00:08:45.417 Admin Commands 00:08:45.417 -------------- 00:08:45.417 Delete I/O Submission Queue (00h): Supported 00:08:45.417 Create I/O Submission Queue (01h): Supported 00:08:45.417 Get Log Page (02h): Supported 00:08:45.417 Delete I/O Completion Queue (04h): Supported 00:08:45.417 Create I/O Completion Queue (05h): Supported 00:08:45.417 Identify (06h): Supported 00:08:45.417 Abort (08h): Supported 00:08:45.417 Set Features (09h): Supported 00:08:45.417 Get Features (0Ah): Supported 00:08:45.417 Asynchronous Event Request (0Ch): Supported 00:08:45.417 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:45.417 Directive Send (19h): Supported 00:08:45.417 Directive Receive (1Ah): Supported 00:08:45.417 Virtualization Management (1Ch): Supported 00:08:45.417 Doorbell Buffer Config (7Ch): Supported 00:08:45.417 Format NVM (80h): Supported LBA-Change 00:08:45.417 I/O Commands 00:08:45.417 ------------ 00:08:45.417 Flush (00h): Supported LBA-Change 00:08:45.417 Write (01h): Supported LBA-Change 00:08:45.417 Read (02h): Supported 00:08:45.417 Compare (05h): Supported 00:08:45.417 Write Zeroes (08h): Supported LBA-Change 00:08:45.417 Dataset Management (09h): Supported LBA-Change 00:08:45.417 Unknown (0Ch): Supported 00:08:45.417 Unknown (12h): Supported 00:08:45.417 Copy (19h): Supported LBA-Change 00:08:45.417 Unknown (1Dh): Supported LBA-Change 00:08:45.417 00:08:45.417 Error Log 00:08:45.417 ========= 00:08:45.417 00:08:45.417 Arbitration 00:08:45.417 ===========
00:08:45.417 Arbitration Burst: no limit 00:08:45.417 00:08:45.417 Power Management 00:08:45.417 ================ 00:08:45.417 Number of Power States: 1 00:08:45.417 Current Power State: Power State #0 00:08:45.417 Power State #0: 00:08:45.417 Max Power: 25.00 W 00:08:45.417 Non-Operational State: Operational 00:08:45.417 Entry Latency: 16 microseconds 00:08:45.417 Exit Latency: 4 microseconds 00:08:45.417 Relative Read Throughput: 0 00:08:45.417 Relative Read Latency: 0 00:08:45.417 Relative Write Throughput: 0 00:08:45.417 Relative Write Latency: 0 00:08:45.417 Idle Power: Not Reported 00:08:45.417 Active Power: Not Reported 00:08:45.417 Non-Operational Permissive Mode: Not Supported 00:08:45.417 00:08:45.417 Health Information 00:08:45.417 ================== 00:08:45.417 Critical Warnings: 00:08:45.417 Available Spare Space: OK 00:08:45.417 Temperature: OK 00:08:45.417 Device Reliability: OK 00:08:45.417 Read Only: No 00:08:45.417 Volatile Memory Backup: OK 00:08:45.417 Current Temperature: 323 Kelvin (50 Celsius) 00:08:45.417 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:45.417 Available Spare: 0% 00:08:45.417 Available Spare Threshold: 0% 00:08:45.417 Life Percentage Used: 0% 00:08:45.417 Data Units Read: 827 00:08:45.417 Data Units Written: 756 00:08:45.417 Host Read Commands: 41099 00:08:45.417 Host Write Commands: 40522 00:08:45.417 Controller Busy Time: 0 minutes 00:08:45.417 Power Cycles: 0 00:08:45.417 Power On Hours: 0 hours 00:08:45.417 Unsafe Shutdowns: 0 00:08:45.417 Unrecoverable Media Errors: 0 00:08:45.417 Lifetime Error Log Entries: 0 00:08:45.417 Warning Temperature Time: 0 minutes 00:08:45.417 Critical Temperature Time: 0 minutes 00:08:45.417 00:08:45.417 Number of Queues 00:08:45.417 ================ 00:08:45.417 Number of I/O Submission Queues: 64 00:08:45.417 Number of I/O Completion Queues: 64 00:08:45.417 00:08:45.417 ZNS Specific Controller Data 00:08:45.417 ============================ 00:08:45.417 Zone Append Size Limit: 0 00:08:45.417 00:08:45.417 00:08:45.417 Active Namespaces 00:08:45.417 ================= 00:08:45.417 Namespace ID:1 00:08:45.417 Error Recovery Timeout: Unlimited 00:08:45.417 Command Set Identifier: NVM (00h) 00:08:45.417 Deallocate: Supported 00:08:45.417 Deallocated/Unwritten Error: Supported 00:08:45.417 Deallocated Read Value: All 0x00 00:08:45.417 Deallocate in Write Zeroes: Not Supported 00:08:45.417 Deallocated Guard Field: 0xFFFF 00:08:45.417 Flush: Supported 00:08:45.417 Reservation: Not Supported 00:08:45.417 Namespace Sharing Capabilities: Multiple Controllers 00:08:45.417 Size (in LBAs): 262144 (1GiB) 00:08:45.417 Capacity (in LBAs): 262144 (1GiB) 00:08:45.417 Utilization (in LBAs): 262144 (1GiB) 00:08:45.417 Thin Provisioning: Not Supported 00:08:45.417 Per-NS Atomic Units: No 00:08:45.417 Maximum Single Source Range Length: 128 00:08:45.417 Maximum Copy Length: 128 00:08:45.417 Maximum Source Range Count: 128 00:08:45.418 NGUID/EUI64 Never Reused: No 00:08:45.418 Namespace Write Protected: No 00:08:45.418 Endurance group ID: 1 00:08:45.418 Number of LBA Formats: 8 00:08:45.418 Current LBA Format: LBA Format #04 00:08:45.418 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:45.418 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:45.418 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:45.418 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:45.418 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:45.418 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:45.418 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:08:45.418 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:45.418 00:08:45.418 Get Feature FDP: 00:08:45.418 ================ 00:08:45.418 Enabled: Yes 00:08:45.418 FDP configuration index: 0 00:08:45.418 00:08:45.418 FDP configurations log page 00:08:45.418 =========================== 00:08:45.418 Number of FDP configurations: 1 00:08:45.418 Version: 0 00:08:45.418 Size: 112 00:08:45.418 FDP Configuration Descriptor: 0 00:08:45.418 Descriptor Size: 96 00:08:45.418 Reclaim Group Identifier format: 2 00:08:45.418 FDP Volatile Write Cache: Not Present 00:08:45.418 FDP Configuration: Valid 00:08:45.418 Vendor Specific Size: 0 00:08:45.418 Number of Reclaim Groups: 2 00:08:45.418 Number of Reclaim Unit Handles: 8 00:08:45.418 Max Placement Identifiers: 128 00:08:45.418 Number of Namespaces Supported: 256 00:08:45.418 Reclaim unit Nominal Size: 6000000 bytes 00:08:45.418 Estimated Reclaim Unit Time Limit: Not Reported 00:08:45.418 RUH Desc #000: RUH Type: Initially Isolated 00:08:45.418 RUH Desc #001: RUH Type: Initially Isolated 00:08:45.418 RUH Desc #002: RUH Type: Initially Isolated 00:08:45.418 RUH Desc #003: RUH Type: Initially Isolated 00:08:45.418 RUH Desc #004: RUH Type: Initially Isolated 00:08:45.418 RUH Desc #005: RUH Type: Initially Isolated 00:08:45.418 RUH Desc #006: RUH Type: Initially Isolated 00:08:45.418 RUH Desc #007: RUH Type: Initially Isolated 00:08:45.418 00:08:45.418 FDP reclaim unit handle usage log page 00:08:45.418 ====================================== 00:08:45.418 Number of Reclaim Unit Handles: 8 00:08:45.418 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:45.418 RUH Usage Desc #001: RUH Attributes: Unused 00:08:45.418 RUH Usage Desc #002: RUH Attributes: Unused 00:08:45.418 RUH Usage Desc #003: RUH Attributes: Unused 00:08:45.418 RUH Usage Desc #004: RUH Attributes: Unused 00:08:45.418 RUH Usage Desc #005: RUH Attributes: Unused 00:08:45.418 RUH Usage Desc #006: RUH Attributes: Unused 00:08:45.418 RUH Usage Desc #007: RUH Attributes: Unused 00:08:45.418 00:08:45.418 FDP statistics log page 00:08:45.418 ======================= 00:08:45.418 Host bytes with metadata written: 414162944 00:08:45.418 Media bytes with metadata written: 414236672 00:08:45.418 Media bytes erased: 0 00:08:45.418 00:08:45.418 FDP events log page 00:08:45.418 =================== 00:08:45.418 Number of FDP events: 0 00:08:45.418 00:08:45.418 NVM Specific Namespace Data 00:08:45.418 =========================== 00:08:45.418 Logical Block Storage Tag Mask: 0 00:08:45.418 Protection Information Capabilities: 00:08:45.418 16b Guard Protection Information Storage Tag Support: No 00:08:45.418 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:45.418 Storage Tag Check Read Support: No 00:08:45.418 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.418 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.418 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.418 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.418 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.418 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.418 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.418 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:45.418 00:08:45.418 real 0m1.121s 00:08:45.418 user 0m0.416s 00:08:45.418 sys 0m0.516s 00:08:45.418 03:59:32 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.418 03:59:32 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:45.418 ************************************ 00:08:45.418 END TEST nvme_identify 00:08:45.418 ************************************ 00:08:45.418 03:59:32 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:45.418 03:59:32 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.418 03:59:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.418 03:59:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:45.690 ************************************ 00:08:45.690 START TEST nvme_perf 00:08:45.690 ************************************ 00:08:45.690 03:59:32 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:45.690 03:59:32 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:47.064 Initializing NVMe Controllers 00:08:47.064 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:47.064 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:47.064 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:47.064 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:47.064 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:47.064 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:47.064 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:47.064 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:47.064 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:47.064 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:47.064 Initialization complete. Launching workers. 
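A quick cross-check for the throughput table that follows: the perf invocation above was started with -o 12288, so every I/O transfers 12288 bytes (12 KiB), and each device's MiB/s column should equal IOPS * 12288 / 2**20. A minimal Python sketch of that arithmetic, using the IOPS figures from the table below (the script is an editorial illustration, not part of the captured run):

IO_SIZE = 12288  # bytes per I/O, from the -o 12288 flag in the invocation above

per_device_iops = {
    "PCIE (0000:00:10.0) NSID 1": 20021.15,
    "PCIE (0000:00:11.0) NSID 1": 20021.15,
    "PCIE (0000:00:13.0) NSID 1": 20021.15,
    "PCIE (0000:00:12.0) NSID 1": 20021.15,
    "PCIE (0000:00:12.0) NSID 2": 20021.15,
    "PCIE (0000:00:12.0) NSID 3": 20085.11,
}

for dev, iops in per_device_iops.items():
    # e.g. 20021.15 IOPS * 12288 B / 2**20 = 234.62 MiB/s, matching the table
    print(f"{dev}: {iops * IO_SIZE / 2**20:.2f} MiB/s")

total_iops = sum(per_device_iops.values())
# Expected to match the Total row: 120190.86 IOPS, 1408.49 MiB/s
print(f"Total: {total_iops:.2f} IOPS, {total_iops * IO_SIZE / 2**20:.2f} MiB/s")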
00:08:47.064 ======================================================== 00:08:47.064 Latency(us) 00:08:47.064 Device Information : IOPS MiB/s Average min max 00:08:47.064 PCIE (0000:00:10.0) NSID 1 from core 0: 20021.15 234.62 6400.57 5477.62 32113.00 00:08:47.064 PCIE (0000:00:11.0) NSID 1 from core 0: 20021.15 234.62 6391.86 5396.24 30312.08 00:08:47.064 PCIE (0000:00:13.0) NSID 1 from core 0: 20021.15 234.62 6381.88 5561.57 28888.29 00:08:47.064 PCIE (0000:00:12.0) NSID 1 from core 0: 20021.15 234.62 6371.70 5598.87 27047.17 00:08:47.064 PCIE (0000:00:12.0) NSID 2 from core 0: 20021.15 234.62 6361.78 5620.50 25269.44 00:08:47.064 PCIE (0000:00:12.0) NSID 3 from core 0: 20085.11 235.37 6331.65 5556.38 20278.84 00:08:47.064 ======================================================== 00:08:47.064 Total : 120190.86 1408.49 6373.22 5396.24 32113.00 00:08:47.064 00:08:47.064 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:47.064 ================================================================================= 00:08:47.064 1.00000% : 5620.972us 00:08:47.064 10.00000% : 5772.209us 00:08:47.064 25.00000% : 5948.652us 00:08:47.064 50.00000% : 6225.920us 00:08:47.064 75.00000% : 6503.188us 00:08:47.064 90.00000% : 6704.837us 00:08:47.064 95.00000% : 6906.486us 00:08:47.064 98.00000% : 8217.206us 00:08:47.064 99.00000% : 9527.926us 00:08:47.064 99.50000% : 26819.348us 00:08:47.064 99.90000% : 31658.929us 00:08:47.064 99.99000% : 32062.228us 00:08:47.064 99.99900% : 32263.877us 00:08:47.064 99.99990% : 32263.877us 00:08:47.064 99.99999% : 32263.877us 00:08:47.064 00:08:47.064 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:47.064 ================================================================================= 00:08:47.064 1.00000% : 5696.591us 00:08:47.064 10.00000% : 5847.828us 00:08:47.065 25.00000% : 5973.858us 00:08:47.065 50.00000% : 6225.920us 00:08:47.065 75.00000% : 6452.775us 00:08:47.065 90.00000% : 6604.012us 00:08:47.065 95.00000% : 6856.074us 00:08:47.065 98.00000% : 8217.206us 00:08:47.065 99.00000% : 9729.575us 00:08:47.065 99.50000% : 25004.505us 00:08:47.065 99.90000% : 29844.086us 00:08:47.065 99.99000% : 30449.034us 00:08:47.065 99.99900% : 30449.034us 00:08:47.065 99.99990% : 30449.034us 00:08:47.065 99.99999% : 30449.034us 00:08:47.065 00:08:47.065 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:47.065 ================================================================================= 00:08:47.065 1.00000% : 5696.591us 00:08:47.065 10.00000% : 5847.828us 00:08:47.065 25.00000% : 5999.065us 00:08:47.065 50.00000% : 6200.714us 00:08:47.065 75.00000% : 6427.569us 00:08:47.065 90.00000% : 6604.012us 00:08:47.065 95.00000% : 6856.074us 00:08:47.065 98.00000% : 8116.382us 00:08:47.065 99.00000% : 9779.988us 00:08:47.065 99.50000% : 23693.785us 00:08:47.065 99.90000% : 28432.542us 00:08:47.065 99.99000% : 29037.489us 00:08:47.065 99.99900% : 29037.489us 00:08:47.065 99.99990% : 29037.489us 00:08:47.065 99.99999% : 29037.489us 00:08:47.065 00:08:47.065 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:47.065 ================================================================================= 00:08:47.065 1.00000% : 5696.591us 00:08:47.065 10.00000% : 5847.828us 00:08:47.065 25.00000% : 5999.065us 00:08:47.065 50.00000% : 6200.714us 00:08:47.065 75.00000% : 6427.569us 00:08:47.065 90.00000% : 6604.012us 00:08:47.065 95.00000% : 6856.074us 00:08:47.065 98.00000% : 8267.618us 00:08:47.065 99.00000% : 
9931.225us 00:08:47.065 99.50000% : 21878.942us 00:08:47.065 99.90000% : 26617.698us 00:08:47.065 99.99000% : 27020.997us 00:08:47.065 99.99900% : 27222.646us 00:08:47.065 99.99990% : 27222.646us 00:08:47.065 99.99999% : 27222.646us 00:08:47.065 00:08:47.065 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:47.065 ================================================================================= 00:08:47.065 1.00000% : 5696.591us 00:08:47.065 10.00000% : 5847.828us 00:08:47.065 25.00000% : 5999.065us 00:08:47.065 50.00000% : 6200.714us 00:08:47.065 75.00000% : 6427.569us 00:08:47.065 90.00000% : 6604.012us 00:08:47.065 95.00000% : 6856.074us 00:08:47.065 98.00000% : 8418.855us 00:08:47.065 99.00000% : 9880.812us 00:08:47.065 99.50000% : 20064.098us 00:08:47.065 99.90000% : 24802.855us 00:08:47.065 99.99000% : 25306.978us 00:08:47.065 99.99900% : 25306.978us 00:08:47.065 99.99990% : 25306.978us 00:08:47.065 99.99999% : 25306.978us 00:08:47.065 00:08:47.065 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:47.065 ================================================================================= 00:08:47.065 1.00000% : 5696.591us 00:08:47.065 10.00000% : 5847.828us 00:08:47.065 25.00000% : 5999.065us 00:08:47.065 50.00000% : 6225.920us 00:08:47.065 75.00000% : 6452.775us 00:08:47.065 90.00000% : 6604.012us 00:08:47.065 95.00000% : 6906.486us 00:08:47.065 98.00000% : 8570.092us 00:08:47.065 99.00000% : 9729.575us 00:08:47.065 99.50000% : 14922.043us 00:08:47.065 99.90000% : 19862.449us 00:08:47.065 99.99000% : 20265.748us 00:08:47.065 99.99900% : 20366.572us 00:08:47.065 99.99990% : 20366.572us 00:08:47.065 99.99999% : 20366.572us 00:08:47.065 00:08:47.065 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:47.065 ============================================================================== 00:08:47.065 Range in us Cumulative IO count 00:08:47.065 5469.735 - 5494.942: 0.0200% ( 4) 00:08:47.065 5494.942 - 5520.148: 0.0300% ( 2) 00:08:47.065 5520.148 - 5545.354: 0.0899% ( 12) 00:08:47.065 5545.354 - 5570.560: 0.2596% ( 34) 00:08:47.065 5570.560 - 5595.766: 0.6240% ( 73) 00:08:47.065 5595.766 - 5620.972: 1.3129% ( 138) 00:08:47.065 5620.972 - 5646.178: 2.1116% ( 160) 00:08:47.065 5646.178 - 5671.385: 3.3197% ( 242) 00:08:47.065 5671.385 - 5696.591: 4.7973% ( 296) 00:08:47.065 5696.591 - 5721.797: 6.4547% ( 332) 00:08:47.065 5721.797 - 5747.003: 8.3566% ( 381) 00:08:47.065 5747.003 - 5772.209: 10.5232% ( 434) 00:08:47.065 5772.209 - 5797.415: 12.6398% ( 424) 00:08:47.065 5797.415 - 5822.622: 14.9561% ( 464) 00:08:47.065 5822.622 - 5847.828: 17.0078% ( 411) 00:08:47.065 5847.828 - 5873.034: 19.2242% ( 444) 00:08:47.065 5873.034 - 5898.240: 21.4557% ( 447) 00:08:47.065 5898.240 - 5923.446: 23.7869% ( 467) 00:08:47.065 5923.446 - 5948.652: 26.1432% ( 472) 00:08:47.065 5948.652 - 5973.858: 28.3197% ( 436) 00:08:47.065 5973.858 - 5999.065: 30.8656% ( 510) 00:08:47.065 5999.065 - 6024.271: 33.0172% ( 431) 00:08:47.065 6024.271 - 6049.477: 35.3984% ( 477) 00:08:47.065 6049.477 - 6074.683: 37.7995% ( 481) 00:08:47.065 6074.683 - 6099.889: 39.9810% ( 437) 00:08:47.065 6099.889 - 6125.095: 42.3922% ( 483) 00:08:47.065 6125.095 - 6150.302: 44.6685% ( 456) 00:08:47.065 6150.302 - 6175.508: 47.0697% ( 481) 00:08:47.065 6175.508 - 6200.714: 49.4209% ( 471) 00:08:47.065 6200.714 - 6225.920: 51.6823% ( 453) 00:08:47.065 6225.920 - 6251.126: 54.0735% ( 479) 00:08:47.065 6251.126 - 6276.332: 56.4397% ( 474) 00:08:47.065 6276.332 - 6301.538: 58.7859% ( 470) 
00:08:47.065 6301.538 - 6326.745: 61.0373% ( 451) 00:08:47.065 6326.745 - 6351.951: 63.5433% ( 502) 00:08:47.065 6351.951 - 6377.157: 65.8197% ( 456) 00:08:47.065 6377.157 - 6402.363: 68.1010% ( 457) 00:08:47.065 6402.363 - 6427.569: 70.5072% ( 482) 00:08:47.065 6427.569 - 6452.775: 73.0681% ( 513) 00:08:47.065 6452.775 - 6503.188: 77.7456% ( 937) 00:08:47.065 6503.188 - 6553.600: 82.3333% ( 919) 00:08:47.065 6553.600 - 6604.012: 86.3718% ( 809) 00:08:47.065 6604.012 - 6654.425: 89.6266% ( 652) 00:08:47.065 6654.425 - 6704.837: 91.9828% ( 472) 00:08:47.065 6704.837 - 6755.249: 93.5753% ( 319) 00:08:47.065 6755.249 - 6805.662: 94.3341% ( 152) 00:08:47.065 6805.662 - 6856.074: 94.8482% ( 103) 00:08:47.065 6856.074 - 6906.486: 95.1977% ( 70) 00:08:47.065 6906.486 - 6956.898: 95.5022% ( 61) 00:08:47.065 6956.898 - 7007.311: 95.7718% ( 54) 00:08:47.065 7007.311 - 7057.723: 96.0014% ( 46) 00:08:47.065 7057.723 - 7108.135: 96.1562% ( 31) 00:08:47.065 7108.135 - 7158.548: 96.3708% ( 43) 00:08:47.065 7158.548 - 7208.960: 96.5206% ( 30) 00:08:47.065 7208.960 - 7259.372: 96.6753% ( 31) 00:08:47.065 7259.372 - 7309.785: 96.7851% ( 22) 00:08:47.065 7309.785 - 7360.197: 96.8950% ( 22) 00:08:47.065 7360.197 - 7410.609: 97.0048% ( 22) 00:08:47.066 7410.609 - 7461.022: 97.0797% ( 15) 00:08:47.066 7461.022 - 7511.434: 97.1546% ( 15) 00:08:47.066 7511.434 - 7561.846: 97.2294% ( 15) 00:08:47.066 7561.846 - 7612.258: 97.2943% ( 13) 00:08:47.066 7612.258 - 7662.671: 97.3442% ( 10) 00:08:47.066 7662.671 - 7713.083: 97.3992% ( 11) 00:08:47.066 7713.083 - 7763.495: 97.4790% ( 16) 00:08:47.066 7763.495 - 7813.908: 97.5689% ( 18) 00:08:47.066 7813.908 - 7864.320: 97.6338% ( 13) 00:08:47.066 7864.320 - 7914.732: 97.6987% ( 13) 00:08:47.066 7914.732 - 7965.145: 97.7736% ( 15) 00:08:47.066 7965.145 - 8015.557: 97.8385% ( 13) 00:08:47.066 8015.557 - 8065.969: 97.9083% ( 14) 00:08:47.066 8065.969 - 8116.382: 97.9533% ( 9) 00:08:47.066 8116.382 - 8166.794: 97.9932% ( 8) 00:08:47.066 8166.794 - 8217.206: 98.0282% ( 7) 00:08:47.066 8217.206 - 8267.618: 98.0631% ( 7) 00:08:47.066 8267.618 - 8318.031: 98.1030% ( 8) 00:08:47.066 8318.031 - 8368.443: 98.1480% ( 9) 00:08:47.066 8368.443 - 8418.855: 98.1729% ( 5) 00:08:47.066 8418.855 - 8469.268: 98.1929% ( 4) 00:08:47.066 8469.268 - 8519.680: 98.2228% ( 6) 00:08:47.066 8519.680 - 8570.092: 98.2478% ( 5) 00:08:47.066 8570.092 - 8620.505: 98.2728% ( 5) 00:08:47.066 8620.505 - 8670.917: 98.3077% ( 7) 00:08:47.066 8670.917 - 8721.329: 98.3576% ( 10) 00:08:47.066 8721.329 - 8771.742: 98.4026% ( 9) 00:08:47.066 8771.742 - 8822.154: 98.4375% ( 7) 00:08:47.066 8822.154 - 8872.566: 98.4774% ( 8) 00:08:47.066 8872.566 - 8922.978: 98.5124% ( 7) 00:08:47.066 8922.978 - 8973.391: 98.5573% ( 9) 00:08:47.066 8973.391 - 9023.803: 98.5923% ( 7) 00:08:47.066 9023.803 - 9074.215: 98.6372% ( 9) 00:08:47.066 9074.215 - 9124.628: 98.6671% ( 6) 00:08:47.066 9124.628 - 9175.040: 98.7121% ( 9) 00:08:47.066 9175.040 - 9225.452: 98.7420% ( 6) 00:08:47.066 9225.452 - 9275.865: 98.7919% ( 10) 00:08:47.066 9275.865 - 9326.277: 98.8219% ( 6) 00:08:47.066 9326.277 - 9376.689: 98.8668% ( 9) 00:08:47.066 9376.689 - 9427.102: 98.9117% ( 9) 00:08:47.066 9427.102 - 9477.514: 98.9517% ( 8) 00:08:47.066 9477.514 - 9527.926: 99.0016% ( 10) 00:08:47.066 9527.926 - 9578.338: 99.0315% ( 6) 00:08:47.066 9578.338 - 9628.751: 99.0765% ( 9) 00:08:47.066 9628.751 - 9679.163: 99.1164% ( 8) 00:08:47.066 9679.163 - 9729.575: 99.1414% ( 5) 00:08:47.066 9729.575 - 9779.988: 99.1613% ( 4) 00:08:47.066 9779.988 - 9830.400: 99.1713% 
( 2) 00:08:47.066 9830.400 - 9880.812: 99.1863% ( 3) 00:08:47.066 9880.812 - 9931.225: 99.1963% ( 2) 00:08:47.066 9931.225 - 9981.637: 99.2113% ( 3) 00:08:47.066 9981.637 - 10032.049: 99.2262% ( 3) 00:08:47.066 10032.049 - 10082.462: 99.2412% ( 3) 00:08:47.066 10082.462 - 10132.874: 99.2562% ( 3) 00:08:47.066 10132.874 - 10183.286: 99.2712% ( 3) 00:08:47.066 10183.286 - 10233.698: 99.2812% ( 2) 00:08:47.066 10233.698 - 10284.111: 99.2961% ( 3) 00:08:47.066 10284.111 - 10334.523: 99.3111% ( 3) 00:08:47.066 10334.523 - 10384.935: 99.3261% ( 3) 00:08:47.066 10384.935 - 10435.348: 99.3411% ( 3) 00:08:47.066 10435.348 - 10485.760: 99.3560% ( 3) 00:08:47.066 10485.760 - 10536.172: 99.3610% ( 1) 00:08:47.066 25811.102 - 26012.751: 99.3710% ( 2) 00:08:47.066 26012.751 - 26214.400: 99.4109% ( 8) 00:08:47.066 26214.400 - 26416.049: 99.4459% ( 7) 00:08:47.066 26416.049 - 26617.698: 99.4908% ( 9) 00:08:47.066 26617.698 - 26819.348: 99.5258% ( 7) 00:08:47.066 26819.348 - 27020.997: 99.5657% ( 8) 00:08:47.066 27020.997 - 27222.646: 99.6006% ( 7) 00:08:47.066 27222.646 - 27424.295: 99.6456% ( 9) 00:08:47.066 27424.295 - 27625.945: 99.6805% ( 7) 00:08:47.066 30449.034 - 30650.683: 99.7155% ( 7) 00:08:47.066 30650.683 - 30852.332: 99.7554% ( 8) 00:08:47.066 30852.332 - 31053.982: 99.7903% ( 7) 00:08:47.066 31053.982 - 31255.631: 99.8353% ( 9) 00:08:47.066 31255.631 - 31457.280: 99.8702% ( 7) 00:08:47.066 31457.280 - 31658.929: 99.9101% ( 8) 00:08:47.066 31658.929 - 31860.578: 99.9501% ( 8) 00:08:47.066 31860.578 - 32062.228: 99.9950% ( 9) 00:08:47.066 32062.228 - 32263.877: 100.0000% ( 1) 00:08:47.066 00:08:47.066 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:47.066 ============================================================================== 00:08:47.066 Range in us Cumulative IO count 00:08:47.066 5394.117 - 5419.323: 0.0200% ( 4) 00:08:47.066 5419.323 - 5444.529: 0.0349% ( 3) 00:08:47.066 5444.529 - 5469.735: 0.0399% ( 1) 00:08:47.066 5469.735 - 5494.942: 0.0449% ( 1) 00:08:47.066 5494.942 - 5520.148: 0.0549% ( 2) 00:08:47.066 5520.148 - 5545.354: 0.0599% ( 1) 00:08:47.066 5545.354 - 5570.560: 0.0948% ( 7) 00:08:47.066 5570.560 - 5595.766: 0.1198% ( 5) 00:08:47.066 5595.766 - 5620.972: 0.1947% ( 15) 00:08:47.066 5620.972 - 5646.178: 0.3544% ( 32) 00:08:47.066 5646.178 - 5671.385: 0.7588% ( 81) 00:08:47.066 5671.385 - 5696.591: 1.4177% ( 132) 00:08:47.066 5696.591 - 5721.797: 2.3413% ( 185) 00:08:47.066 5721.797 - 5747.003: 3.6841% ( 269) 00:08:47.066 5747.003 - 5772.209: 5.3065% ( 325) 00:08:47.066 5772.209 - 5797.415: 7.1136% ( 362) 00:08:47.066 5797.415 - 5822.622: 9.3301% ( 444) 00:08:47.066 5822.622 - 5847.828: 11.7911% ( 493) 00:08:47.066 5847.828 - 5873.034: 14.2272% ( 488) 00:08:47.066 5873.034 - 5898.240: 16.8980% ( 535) 00:08:47.066 5898.240 - 5923.446: 19.6436% ( 550) 00:08:47.066 5923.446 - 5948.652: 22.3193% ( 536) 00:08:47.066 5948.652 - 5973.858: 25.1198% ( 561) 00:08:47.066 5973.858 - 5999.065: 27.7756% ( 532) 00:08:47.066 5999.065 - 6024.271: 30.5411% ( 554) 00:08:47.066 6024.271 - 6049.477: 33.3017% ( 553) 00:08:47.066 6049.477 - 6074.683: 36.0623% ( 553) 00:08:47.066 6074.683 - 6099.889: 38.7979% ( 548) 00:08:47.066 6099.889 - 6125.095: 41.6184% ( 565) 00:08:47.066 6125.095 - 6150.302: 44.4339% ( 564) 00:08:47.066 6150.302 - 6175.508: 47.1196% ( 538) 00:08:47.066 6175.508 - 6200.714: 49.8153% ( 540) 00:08:47.066 6200.714 - 6225.920: 52.5859% ( 555) 00:08:47.066 6225.920 - 6251.126: 55.3065% ( 545) 00:08:47.066 6251.126 - 6276.332: 58.0621% ( 552) 00:08:47.066 
6276.332 - 6301.538: 60.8127% ( 551) 00:08:47.066 6301.538 - 6326.745: 63.6432% ( 567) 00:08:47.066 6326.745 - 6351.951: 66.4137% ( 555) 00:08:47.066 6351.951 - 6377.157: 69.1394% ( 546) 00:08:47.066 6377.157 - 6402.363: 71.9499% ( 563) 00:08:47.066 6402.363 - 6427.569: 74.7155% ( 554) 00:08:47.066 6427.569 - 6452.775: 77.4710% ( 552) 00:08:47.066 6452.775 - 6503.188: 82.6278% ( 1033) 00:08:47.066 6503.188 - 6553.600: 86.9559% ( 867) 00:08:47.067 6553.600 - 6604.012: 90.2057% ( 651) 00:08:47.067 6604.012 - 6654.425: 92.4221% ( 444) 00:08:47.067 6654.425 - 6704.837: 93.6352% ( 243) 00:08:47.067 6704.837 - 6755.249: 94.2592% ( 125) 00:08:47.067 6755.249 - 6805.662: 94.6935% ( 87) 00:08:47.067 6805.662 - 6856.074: 95.0329% ( 68) 00:08:47.067 6856.074 - 6906.486: 95.2925% ( 52) 00:08:47.067 6906.486 - 6956.898: 95.5421% ( 50) 00:08:47.067 6956.898 - 7007.311: 95.7418% ( 40) 00:08:47.067 7007.311 - 7057.723: 95.9315% ( 38) 00:08:47.067 7057.723 - 7108.135: 96.1512% ( 44) 00:08:47.067 7108.135 - 7158.548: 96.3359% ( 37) 00:08:47.067 7158.548 - 7208.960: 96.5256% ( 38) 00:08:47.067 7208.960 - 7259.372: 96.6304% ( 21) 00:08:47.067 7259.372 - 7309.785: 96.7452% ( 23) 00:08:47.067 7309.785 - 7360.197: 96.8450% ( 20) 00:08:47.067 7360.197 - 7410.609: 96.9249% ( 16) 00:08:47.067 7410.609 - 7461.022: 96.9948% ( 14) 00:08:47.067 7461.022 - 7511.434: 97.0597% ( 13) 00:08:47.067 7511.434 - 7561.846: 97.1396% ( 16) 00:08:47.067 7561.846 - 7612.258: 97.2145% ( 15) 00:08:47.067 7612.258 - 7662.671: 97.2893% ( 15) 00:08:47.067 7662.671 - 7713.083: 97.3642% ( 15) 00:08:47.067 7713.083 - 7763.495: 97.4391% ( 15) 00:08:47.067 7763.495 - 7813.908: 97.5040% ( 13) 00:08:47.067 7813.908 - 7864.320: 97.5789% ( 15) 00:08:47.067 7864.320 - 7914.732: 97.6637% ( 17) 00:08:47.067 7914.732 - 7965.145: 97.7336% ( 14) 00:08:47.067 7965.145 - 8015.557: 97.7935% ( 12) 00:08:47.067 8015.557 - 8065.969: 97.8584% ( 13) 00:08:47.067 8065.969 - 8116.382: 97.9183% ( 12) 00:08:47.067 8116.382 - 8166.794: 97.9782% ( 12) 00:08:47.067 8166.794 - 8217.206: 98.0381% ( 12) 00:08:47.067 8217.206 - 8267.618: 98.0980% ( 12) 00:08:47.067 8267.618 - 8318.031: 98.1330% ( 7) 00:08:47.067 8318.031 - 8368.443: 98.1629% ( 6) 00:08:47.067 8368.443 - 8418.855: 98.1879% ( 5) 00:08:47.067 8418.855 - 8469.268: 98.2278% ( 8) 00:08:47.067 8469.268 - 8519.680: 98.2678% ( 8) 00:08:47.067 8519.680 - 8570.092: 98.3077% ( 8) 00:08:47.067 8570.092 - 8620.505: 98.3427% ( 7) 00:08:47.067 8620.505 - 8670.917: 98.3926% ( 10) 00:08:47.067 8670.917 - 8721.329: 98.4225% ( 6) 00:08:47.067 8721.329 - 8771.742: 98.4525% ( 6) 00:08:47.067 8771.742 - 8822.154: 98.4675% ( 3) 00:08:47.067 8822.154 - 8872.566: 98.4924% ( 5) 00:08:47.067 8872.566 - 8922.978: 98.5274% ( 7) 00:08:47.067 8922.978 - 8973.391: 98.5573% ( 6) 00:08:47.067 8973.391 - 9023.803: 98.5773% ( 4) 00:08:47.067 9023.803 - 9074.215: 98.6072% ( 6) 00:08:47.067 9074.215 - 9124.628: 98.6322% ( 5) 00:08:47.067 9124.628 - 9175.040: 98.6571% ( 5) 00:08:47.067 9175.040 - 9225.452: 98.6871% ( 6) 00:08:47.067 9225.452 - 9275.865: 98.7171% ( 6) 00:08:47.067 9275.865 - 9326.277: 98.7420% ( 5) 00:08:47.067 9326.277 - 9376.689: 98.7670% ( 5) 00:08:47.067 9376.689 - 9427.102: 98.7969% ( 6) 00:08:47.067 9427.102 - 9477.514: 98.8219% ( 5) 00:08:47.067 9477.514 - 9527.926: 98.8568% ( 7) 00:08:47.067 9527.926 - 9578.338: 98.8818% ( 5) 00:08:47.067 9578.338 - 9628.751: 98.9167% ( 7) 00:08:47.067 9628.751 - 9679.163: 98.9766% ( 12) 00:08:47.067 9679.163 - 9729.575: 99.0066% ( 6) 00:08:47.067 9729.575 - 9779.988: 99.0415% ( 7) 
00:08:47.067 9779.988 - 9830.400: 99.0765% ( 7) 00:08:47.067 9830.400 - 9880.812: 99.1014% ( 5) 00:08:47.067 9880.812 - 9931.225: 99.1314% ( 6) 00:08:47.067 9931.225 - 9981.637: 99.1563% ( 5) 00:08:47.067 9981.637 - 10032.049: 99.1863% ( 6) 00:08:47.067 10032.049 - 10082.462: 99.2113% ( 5) 00:08:47.067 10082.462 - 10132.874: 99.2262% ( 3) 00:08:47.067 10132.874 - 10183.286: 99.2412% ( 3) 00:08:47.067 10183.286 - 10233.698: 99.2612% ( 4) 00:08:47.067 10233.698 - 10284.111: 99.2762% ( 3) 00:08:47.067 10284.111 - 10334.523: 99.2911% ( 3) 00:08:47.067 10334.523 - 10384.935: 99.3061% ( 3) 00:08:47.067 10384.935 - 10435.348: 99.3211% ( 3) 00:08:47.067 10435.348 - 10485.760: 99.3411% ( 4) 00:08:47.067 10485.760 - 10536.172: 99.3560% ( 3) 00:08:47.067 10536.172 - 10586.585: 99.3610% ( 1) 00:08:47.067 24298.732 - 24399.557: 99.3760% ( 3) 00:08:47.067 24399.557 - 24500.382: 99.3960% ( 4) 00:08:47.067 24500.382 - 24601.206: 99.4159% ( 4) 00:08:47.067 24601.206 - 24702.031: 99.4359% ( 4) 00:08:47.067 24702.031 - 24802.855: 99.4609% ( 5) 00:08:47.067 24802.855 - 24903.680: 99.4808% ( 4) 00:08:47.067 24903.680 - 25004.505: 99.5008% ( 4) 00:08:47.067 25004.505 - 25105.329: 99.5208% ( 4) 00:08:47.067 25105.329 - 25206.154: 99.5457% ( 5) 00:08:47.067 25206.154 - 25306.978: 99.5657% ( 4) 00:08:47.067 25306.978 - 25407.803: 99.5857% ( 4) 00:08:47.067 25407.803 - 25508.628: 99.6106% ( 5) 00:08:47.067 25508.628 - 25609.452: 99.6256% ( 3) 00:08:47.067 25609.452 - 25710.277: 99.6506% ( 5) 00:08:47.067 25710.277 - 25811.102: 99.6705% ( 4) 00:08:47.067 25811.102 - 26012.751: 99.6805% ( 2) 00:08:47.067 28634.191 - 28835.840: 99.6855% ( 1) 00:08:47.067 28835.840 - 29037.489: 99.7304% ( 9) 00:08:47.067 29037.489 - 29239.138: 99.7704% ( 8) 00:08:47.067 29239.138 - 29440.788: 99.8103% ( 8) 00:08:47.067 29440.788 - 29642.437: 99.8552% ( 9) 00:08:47.067 29642.437 - 29844.086: 99.9002% ( 9) 00:08:47.067 29844.086 - 30045.735: 99.9401% ( 8) 00:08:47.067 30045.735 - 30247.385: 99.9850% ( 9) 00:08:47.067 30247.385 - 30449.034: 100.0000% ( 3) 00:08:47.067 00:08:47.067 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:47.067 ============================================================================== 00:08:47.067 Range in us Cumulative IO count 00:08:47.067 5545.354 - 5570.560: 0.0050% ( 1) 00:08:47.067 5570.560 - 5595.766: 0.0200% ( 3) 00:08:47.067 5595.766 - 5620.972: 0.0449% ( 5) 00:08:47.067 5620.972 - 5646.178: 0.2246% ( 36) 00:08:47.067 5646.178 - 5671.385: 0.5641% ( 68) 00:08:47.067 5671.385 - 5696.591: 1.1532% ( 118) 00:08:47.067 5696.591 - 5721.797: 2.2214% ( 214) 00:08:47.067 5721.797 - 5747.003: 3.6841% ( 293) 00:08:47.067 5747.003 - 5772.209: 5.2616% ( 316) 00:08:47.067 5772.209 - 5797.415: 7.1486% ( 378) 00:08:47.067 5797.415 - 5822.622: 9.3201% ( 435) 00:08:47.067 5822.622 - 5847.828: 11.5915% ( 455) 00:08:47.067 5847.828 - 5873.034: 14.1374% ( 510) 00:08:47.067 5873.034 - 5898.240: 16.7782% ( 529) 00:08:47.067 5898.240 - 5923.446: 19.3840% ( 522) 00:08:47.067 5923.446 - 5948.652: 22.1446% ( 553) 00:08:47.067 5948.652 - 5973.858: 24.9201% ( 556) 00:08:47.067 5973.858 - 5999.065: 27.6408% ( 545) 00:08:47.067 5999.065 - 6024.271: 30.4613% ( 565) 00:08:47.067 6024.271 - 6049.477: 33.2568% ( 560) 00:08:47.067 6049.477 - 6074.683: 36.1372% ( 577) 00:08:47.067 6074.683 - 6099.889: 38.9726% ( 568) 00:08:47.067 6099.889 - 6125.095: 41.8530% ( 577) 00:08:47.067 6125.095 - 6150.302: 44.6436% ( 559) 00:08:47.067 6150.302 - 6175.508: 47.4441% ( 561) 00:08:47.067 6175.508 - 6200.714: 50.2147% ( 555) 
00:08:47.067 6200.714 - 6225.920: 52.9852% ( 555) 00:08:47.067 6225.920 - 6251.126: 55.7358% ( 551) 00:08:47.067 6251.126 - 6276.332: 58.4515% ( 544) 00:08:47.068 6276.332 - 6301.538: 61.2420% ( 559) 00:08:47.068 6301.538 - 6326.745: 63.9926% ( 551) 00:08:47.068 6326.745 - 6351.951: 66.7183% ( 546) 00:08:47.068 6351.951 - 6377.157: 69.5387% ( 565) 00:08:47.068 6377.157 - 6402.363: 72.3293% ( 559) 00:08:47.068 6402.363 - 6427.569: 75.0899% ( 553) 00:08:47.068 6427.569 - 6452.775: 77.7955% ( 542) 00:08:47.068 6452.775 - 6503.188: 82.9872% ( 1040) 00:08:47.068 6503.188 - 6553.600: 87.3652% ( 877) 00:08:47.068 6553.600 - 6604.012: 90.4553% ( 619) 00:08:47.068 6604.012 - 6654.425: 92.5519% ( 420) 00:08:47.068 6654.425 - 6704.837: 93.7700% ( 244) 00:08:47.068 6704.837 - 6755.249: 94.4189% ( 130) 00:08:47.068 6755.249 - 6805.662: 94.8083% ( 78) 00:08:47.068 6805.662 - 6856.074: 95.0929% ( 57) 00:08:47.068 6856.074 - 6906.486: 95.3025% ( 42) 00:08:47.068 6906.486 - 6956.898: 95.5172% ( 43) 00:08:47.068 6956.898 - 7007.311: 95.7169% ( 40) 00:08:47.068 7007.311 - 7057.723: 95.9065% ( 38) 00:08:47.068 7057.723 - 7108.135: 96.1212% ( 43) 00:08:47.068 7108.135 - 7158.548: 96.3409% ( 44) 00:08:47.068 7158.548 - 7208.960: 96.5256% ( 37) 00:08:47.068 7208.960 - 7259.372: 96.6653% ( 28) 00:08:47.068 7259.372 - 7309.785: 96.8201% ( 31) 00:08:47.068 7309.785 - 7360.197: 96.9349% ( 23) 00:08:47.068 7360.197 - 7410.609: 97.0248% ( 18) 00:08:47.068 7410.609 - 7461.022: 97.1046% ( 16) 00:08:47.068 7461.022 - 7511.434: 97.1945% ( 18) 00:08:47.068 7511.434 - 7561.846: 97.2644% ( 14) 00:08:47.068 7561.846 - 7612.258: 97.3492% ( 17) 00:08:47.068 7612.258 - 7662.671: 97.4291% ( 16) 00:08:47.068 7662.671 - 7713.083: 97.4990% ( 14) 00:08:47.068 7713.083 - 7763.495: 97.5739% ( 15) 00:08:47.068 7763.495 - 7813.908: 97.6538% ( 16) 00:08:47.068 7813.908 - 7864.320: 97.7286% ( 15) 00:08:47.068 7864.320 - 7914.732: 97.7786% ( 10) 00:08:47.068 7914.732 - 7965.145: 97.8335% ( 11) 00:08:47.068 7965.145 - 8015.557: 97.8934% ( 12) 00:08:47.068 8015.557 - 8065.969: 97.9483% ( 11) 00:08:47.068 8065.969 - 8116.382: 98.0132% ( 13) 00:08:47.068 8116.382 - 8166.794: 98.0731% ( 12) 00:08:47.068 8166.794 - 8217.206: 98.1180% ( 9) 00:08:47.068 8217.206 - 8267.618: 98.1530% ( 7) 00:08:47.068 8267.618 - 8318.031: 98.1879% ( 7) 00:08:47.068 8318.031 - 8368.443: 98.2129% ( 5) 00:08:47.068 8368.443 - 8418.855: 98.2328% ( 4) 00:08:47.068 8418.855 - 8469.268: 98.2528% ( 4) 00:08:47.068 8469.268 - 8519.680: 98.2728% ( 4) 00:08:47.068 8519.680 - 8570.092: 98.2827% ( 2) 00:08:47.068 8570.092 - 8620.505: 98.2927% ( 2) 00:08:47.068 8620.505 - 8670.917: 98.3027% ( 2) 00:08:47.068 8670.917 - 8721.329: 98.3127% ( 2) 00:08:47.068 8721.329 - 8771.742: 98.3227% ( 2) 00:08:47.068 8771.742 - 8822.154: 98.3327% ( 2) 00:08:47.068 8822.154 - 8872.566: 98.3427% ( 2) 00:08:47.068 8872.566 - 8922.978: 98.3676% ( 5) 00:08:47.068 8922.978 - 8973.391: 98.4125% ( 9) 00:08:47.068 8973.391 - 9023.803: 98.4475% ( 7) 00:08:47.068 9023.803 - 9074.215: 98.4974% ( 10) 00:08:47.068 9074.215 - 9124.628: 98.5423% ( 9) 00:08:47.068 9124.628 - 9175.040: 98.5972% ( 11) 00:08:47.068 9175.040 - 9225.452: 98.6322% ( 7) 00:08:47.068 9225.452 - 9275.865: 98.6621% ( 6) 00:08:47.068 9275.865 - 9326.277: 98.6971% ( 7) 00:08:47.068 9326.277 - 9376.689: 98.7270% ( 6) 00:08:47.068 9376.689 - 9427.102: 98.7620% ( 7) 00:08:47.068 9427.102 - 9477.514: 98.7919% ( 6) 00:08:47.068 9477.514 - 9527.926: 98.8319% ( 8) 00:08:47.068 9527.926 - 9578.338: 98.8568% ( 5) 00:08:47.068 9578.338 - 9628.751: 
98.8968% ( 8) 00:08:47.068 9628.751 - 9679.163: 98.9367% ( 8) 00:08:47.068 9679.163 - 9729.575: 98.9816% ( 9) 00:08:47.068 9729.575 - 9779.988: 99.0266% ( 9) 00:08:47.068 9779.988 - 9830.400: 99.0665% ( 8) 00:08:47.068 9830.400 - 9880.812: 99.0915% ( 5) 00:08:47.068 9880.812 - 9931.225: 99.1064% ( 3) 00:08:47.068 9931.225 - 9981.637: 99.1164% ( 2) 00:08:47.068 9981.637 - 10032.049: 99.1264% ( 2) 00:08:47.068 10032.049 - 10082.462: 99.1314% ( 1) 00:08:47.068 10082.462 - 10132.874: 99.1414% ( 2) 00:08:47.068 10132.874 - 10183.286: 99.1514% ( 2) 00:08:47.068 10183.286 - 10233.698: 99.1713% ( 4) 00:08:47.068 10233.698 - 10284.111: 99.1863% ( 3) 00:08:47.068 10284.111 - 10334.523: 99.2063% ( 4) 00:08:47.068 10334.523 - 10384.935: 99.2212% ( 3) 00:08:47.068 10384.935 - 10435.348: 99.2362% ( 3) 00:08:47.068 10435.348 - 10485.760: 99.2462% ( 2) 00:08:47.068 10485.760 - 10536.172: 99.2662% ( 4) 00:08:47.068 10536.172 - 10586.585: 99.2812% ( 3) 00:08:47.068 10586.585 - 10636.997: 99.2961% ( 3) 00:08:47.068 10636.997 - 10687.409: 99.3161% ( 4) 00:08:47.068 10687.409 - 10737.822: 99.3311% ( 3) 00:08:47.068 10737.822 - 10788.234: 99.3460% ( 3) 00:08:47.068 10788.234 - 10838.646: 99.3610% ( 3) 00:08:47.068 22887.188 - 22988.012: 99.3710% ( 2) 00:08:47.068 22988.012 - 23088.837: 99.3910% ( 4) 00:08:47.068 23088.837 - 23189.662: 99.4109% ( 4) 00:08:47.068 23189.662 - 23290.486: 99.4309% ( 4) 00:08:47.068 23290.486 - 23391.311: 99.4509% ( 4) 00:08:47.068 23391.311 - 23492.135: 99.4708% ( 4) 00:08:47.068 23492.135 - 23592.960: 99.4908% ( 4) 00:08:47.068 23592.960 - 23693.785: 99.5158% ( 5) 00:08:47.068 23693.785 - 23794.609: 99.5357% ( 4) 00:08:47.068 23794.609 - 23895.434: 99.5557% ( 4) 00:08:47.068 23895.434 - 23996.258: 99.5707% ( 3) 00:08:47.068 23996.258 - 24097.083: 99.5857% ( 3) 00:08:47.068 24097.083 - 24197.908: 99.6056% ( 4) 00:08:47.068 24197.908 - 24298.732: 99.6306% ( 5) 00:08:47.068 24298.732 - 24399.557: 99.6506% ( 4) 00:08:47.068 24399.557 - 24500.382: 99.6705% ( 4) 00:08:47.068 24500.382 - 24601.206: 99.6805% ( 2) 00:08:47.068 27222.646 - 27424.295: 99.7005% ( 4) 00:08:47.068 27424.295 - 27625.945: 99.7404% ( 8) 00:08:47.068 27625.945 - 27827.594: 99.7804% ( 8) 00:08:47.068 27827.594 - 28029.243: 99.8203% ( 8) 00:08:47.068 28029.243 - 28230.892: 99.8652% ( 9) 00:08:47.068 28230.892 - 28432.542: 99.9052% ( 8) 00:08:47.068 28432.542 - 28634.191: 99.9451% ( 8) 00:08:47.068 28634.191 - 28835.840: 99.9850% ( 8) 00:08:47.068 28835.840 - 29037.489: 100.0000% ( 3) 00:08:47.068 00:08:47.068 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:47.068 ============================================================================== 00:08:47.068 Range in us Cumulative IO count 00:08:47.068 5595.766 - 5620.972: 0.0649% ( 13) 00:08:47.068 5620.972 - 5646.178: 0.2196% ( 31) 00:08:47.068 5646.178 - 5671.385: 0.5591% ( 68) 00:08:47.068 5671.385 - 5696.591: 1.2031% ( 129) 00:08:47.068 5696.591 - 5721.797: 2.2414% ( 208) 00:08:47.068 5721.797 - 5747.003: 3.5593% ( 264) 00:08:47.068 5747.003 - 5772.209: 5.1168% ( 312) 00:08:47.068 5772.209 - 5797.415: 6.8640% ( 350) 00:08:47.068 5797.415 - 5822.622: 9.2202% ( 472) 00:08:47.068 5822.622 - 5847.828: 11.6763% ( 492) 00:08:47.068 5847.828 - 5873.034: 14.1374% ( 493) 00:08:47.068 5873.034 - 5898.240: 16.7682% ( 527) 00:08:47.068 5898.240 - 5923.446: 19.4589% ( 539) 00:08:47.068 5923.446 - 5948.652: 22.0847% ( 526) 00:08:47.068 5948.652 - 5973.858: 24.8802% ( 560) 00:08:47.068 5973.858 - 5999.065: 27.6757% ( 560) 00:08:47.068 5999.065 - 6024.271: 
30.4663% ( 559) 00:08:47.068 6024.271 - 6049.477: 33.2768% ( 563) 00:08:47.068 6049.477 - 6074.683: 36.1172% ( 569) 00:08:47.068 6074.683 - 6099.889: 38.9177% ( 561) 00:08:47.069 6099.889 - 6125.095: 41.7432% ( 566) 00:08:47.069 6125.095 - 6150.302: 44.5637% ( 565) 00:08:47.069 6150.302 - 6175.508: 47.3842% ( 565) 00:08:47.069 6175.508 - 6200.714: 50.1148% ( 547) 00:08:47.069 6200.714 - 6225.920: 52.9054% ( 559) 00:08:47.069 6225.920 - 6251.126: 55.6560% ( 551) 00:08:47.069 6251.126 - 6276.332: 58.3816% ( 546) 00:08:47.069 6276.332 - 6301.538: 61.1472% ( 554) 00:08:47.069 6301.538 - 6326.745: 63.9277% ( 557) 00:08:47.069 6326.745 - 6351.951: 66.7482% ( 565) 00:08:47.069 6351.951 - 6377.157: 69.5387% ( 559) 00:08:47.069 6377.157 - 6402.363: 72.3892% ( 571) 00:08:47.069 6402.363 - 6427.569: 75.1747% ( 558) 00:08:47.069 6427.569 - 6452.775: 77.9453% ( 555) 00:08:47.069 6452.775 - 6503.188: 83.1470% ( 1042) 00:08:47.069 6503.188 - 6553.600: 87.4002% ( 852) 00:08:47.069 6553.600 - 6604.012: 90.5351% ( 628) 00:08:47.069 6604.012 - 6654.425: 92.6567% ( 425) 00:08:47.069 6654.425 - 6704.837: 93.7600% ( 221) 00:08:47.069 6704.837 - 6755.249: 94.4289% ( 134) 00:08:47.069 6755.249 - 6805.662: 94.8732% ( 89) 00:08:47.069 6805.662 - 6856.074: 95.2177% ( 69) 00:08:47.069 6856.074 - 6906.486: 95.4972% ( 56) 00:08:47.069 6906.486 - 6956.898: 95.6969% ( 40) 00:08:47.069 6956.898 - 7007.311: 95.8816% ( 37) 00:08:47.069 7007.311 - 7057.723: 96.0663% ( 37) 00:08:47.069 7057.723 - 7108.135: 96.2460% ( 36) 00:08:47.069 7108.135 - 7158.548: 96.4407% ( 39) 00:08:47.069 7158.548 - 7208.960: 96.6154% ( 35) 00:08:47.069 7208.960 - 7259.372: 96.7702% ( 31) 00:08:47.069 7259.372 - 7309.785: 96.8950% ( 25) 00:08:47.069 7309.785 - 7360.197: 97.0148% ( 24) 00:08:47.069 7360.197 - 7410.609: 97.1046% ( 18) 00:08:47.069 7410.609 - 7461.022: 97.1895% ( 17) 00:08:47.069 7461.022 - 7511.434: 97.2843% ( 19) 00:08:47.069 7511.434 - 7561.846: 97.3692% ( 17) 00:08:47.069 7561.846 - 7612.258: 97.4441% ( 15) 00:08:47.069 7612.258 - 7662.671: 97.5240% ( 16) 00:08:47.069 7662.671 - 7713.083: 97.5988% ( 15) 00:08:47.069 7713.083 - 7763.495: 97.6737% ( 15) 00:08:47.069 7763.495 - 7813.908: 97.7236% ( 10) 00:08:47.069 7813.908 - 7864.320: 97.7636% ( 8) 00:08:47.069 7864.320 - 7914.732: 97.8085% ( 9) 00:08:47.069 7914.732 - 7965.145: 97.8385% ( 6) 00:08:47.069 7965.145 - 8015.557: 97.8684% ( 6) 00:08:47.069 8015.557 - 8065.969: 97.8934% ( 5) 00:08:47.069 8065.969 - 8116.382: 97.9233% ( 6) 00:08:47.069 8116.382 - 8166.794: 97.9483% ( 5) 00:08:47.069 8166.794 - 8217.206: 97.9782% ( 6) 00:08:47.069 8217.206 - 8267.618: 98.0082% ( 6) 00:08:47.069 8267.618 - 8318.031: 98.0381% ( 6) 00:08:47.069 8318.031 - 8368.443: 98.0681% ( 6) 00:08:47.069 8368.443 - 8418.855: 98.1080% ( 8) 00:08:47.069 8418.855 - 8469.268: 98.1330% ( 5) 00:08:47.069 8469.268 - 8519.680: 98.1480% ( 3) 00:08:47.069 8519.680 - 8570.092: 98.1629% ( 3) 00:08:47.069 8570.092 - 8620.505: 98.1829% ( 4) 00:08:47.069 8620.505 - 8670.917: 98.1979% ( 3) 00:08:47.069 8670.917 - 8721.329: 98.2129% ( 3) 00:08:47.069 8721.329 - 8771.742: 98.2328% ( 4) 00:08:47.069 8771.742 - 8822.154: 98.2478% ( 3) 00:08:47.069 8822.154 - 8872.566: 98.2628% ( 3) 00:08:47.069 8872.566 - 8922.978: 98.2778% ( 3) 00:08:47.069 8922.978 - 8973.391: 98.2977% ( 4) 00:08:47.069 8973.391 - 9023.803: 98.3127% ( 3) 00:08:47.069 9023.803 - 9074.215: 98.3676% ( 11) 00:08:47.069 9074.215 - 9124.628: 98.4175% ( 10) 00:08:47.069 9124.628 - 9175.040: 98.4625% ( 9) 00:08:47.069 9175.040 - 9225.452: 98.5074% ( 9) 00:08:47.069 
9225.452 - 9275.865: 98.5523% ( 9) 00:08:47.069 9275.865 - 9326.277: 98.5972% ( 9) 00:08:47.069 9326.277 - 9376.689: 98.6272% ( 6) 00:08:47.069 9376.689 - 9427.102: 98.6671% ( 8) 00:08:47.069 9427.102 - 9477.514: 98.6971% ( 6) 00:08:47.069 9477.514 - 9527.926: 98.7320% ( 7) 00:08:47.069 9527.926 - 9578.338: 98.7670% ( 7) 00:08:47.069 9578.338 - 9628.751: 98.7969% ( 6) 00:08:47.069 9628.751 - 9679.163: 98.8319% ( 7) 00:08:47.069 9679.163 - 9729.575: 98.8668% ( 7) 00:08:47.069 9729.575 - 9779.988: 98.9018% ( 7) 00:08:47.069 9779.988 - 9830.400: 98.9367% ( 7) 00:08:47.069 9830.400 - 9880.812: 98.9667% ( 6) 00:08:47.069 9880.812 - 9931.225: 99.0016% ( 7) 00:08:47.069 9931.225 - 9981.637: 99.0315% ( 6) 00:08:47.069 9981.637 - 10032.049: 99.0415% ( 2) 00:08:47.069 10032.049 - 10082.462: 99.0815% ( 8) 00:08:47.069 10082.462 - 10132.874: 99.1014% ( 4) 00:08:47.069 10132.874 - 10183.286: 99.1114% ( 2) 00:08:47.069 10183.286 - 10233.698: 99.1264% ( 3) 00:08:47.069 10233.698 - 10284.111: 99.1414% ( 3) 00:08:47.069 10284.111 - 10334.523: 99.1613% ( 4) 00:08:47.069 10334.523 - 10384.935: 99.1763% ( 3) 00:08:47.069 10384.935 - 10435.348: 99.1963% ( 4) 00:08:47.069 10435.348 - 10485.760: 99.2113% ( 3) 00:08:47.069 10485.760 - 10536.172: 99.2262% ( 3) 00:08:47.069 10536.172 - 10586.585: 99.2412% ( 3) 00:08:47.069 10586.585 - 10636.997: 99.2562% ( 3) 00:08:47.069 10636.997 - 10687.409: 99.2762% ( 4) 00:08:47.069 10687.409 - 10737.822: 99.2911% ( 3) 00:08:47.069 10737.822 - 10788.234: 99.3111% ( 4) 00:08:47.069 10788.234 - 10838.646: 99.3261% ( 3) 00:08:47.069 10838.646 - 10889.058: 99.3411% ( 3) 00:08:47.069 10889.058 - 10939.471: 99.3560% ( 3) 00:08:47.069 10939.471 - 10989.883: 99.3610% ( 1) 00:08:47.069 21072.345 - 21173.169: 99.3660% ( 1) 00:08:47.069 21173.169 - 21273.994: 99.3860% ( 4) 00:08:47.069 21273.994 - 21374.818: 99.4060% ( 4) 00:08:47.069 21374.818 - 21475.643: 99.4259% ( 4) 00:08:47.069 21475.643 - 21576.468: 99.4459% ( 4) 00:08:47.069 21576.468 - 21677.292: 99.4659% ( 4) 00:08:47.069 21677.292 - 21778.117: 99.4858% ( 4) 00:08:47.069 21778.117 - 21878.942: 99.5058% ( 4) 00:08:47.069 21878.942 - 21979.766: 99.5258% ( 4) 00:08:47.069 21979.766 - 22080.591: 99.5457% ( 4) 00:08:47.069 22080.591 - 22181.415: 99.5657% ( 4) 00:08:47.069 22181.415 - 22282.240: 99.5907% ( 5) 00:08:47.069 22282.240 - 22383.065: 99.6106% ( 4) 00:08:47.069 22383.065 - 22483.889: 99.6306% ( 4) 00:08:47.069 22483.889 - 22584.714: 99.6506% ( 4) 00:08:47.069 22584.714 - 22685.538: 99.6705% ( 4) 00:08:47.069 22685.538 - 22786.363: 99.6805% ( 2) 00:08:47.069 25508.628 - 25609.452: 99.7005% ( 4) 00:08:47.069 25609.452 - 25710.277: 99.7204% ( 4) 00:08:47.069 25710.277 - 25811.102: 99.7454% ( 5) 00:08:47.069 25811.102 - 26012.751: 99.7853% ( 8) 00:08:47.069 26012.751 - 26214.400: 99.8253% ( 8) 00:08:47.069 26214.400 - 26416.049: 99.8652% ( 8) 00:08:47.069 26416.049 - 26617.698: 99.9052% ( 8) 00:08:47.069 26617.698 - 26819.348: 99.9501% ( 9) 00:08:47.069 26819.348 - 27020.997: 99.9900% ( 8) 00:08:47.069 27020.997 - 27222.646: 100.0000% ( 2) 00:08:47.069 00:08:47.069 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:47.069 ============================================================================== 00:08:47.069 Range in us Cumulative IO count 00:08:47.069 5595.766 - 5620.972: 0.0050% ( 1) 00:08:47.069 5620.972 - 5646.178: 0.2396% ( 47) 00:08:47.069 5646.178 - 5671.385: 0.6689% ( 86) 00:08:47.069 5671.385 - 5696.591: 1.2230% ( 111) 00:08:47.069 5696.591 - 5721.797: 2.2214% ( 200) 00:08:47.069 5721.797 - 
5747.003: 3.5443% ( 265) 00:08:47.069 5747.003 - 5772.209: 5.1268% ( 317) 00:08:47.069 5772.209 - 5797.415: 7.0887% ( 393) 00:08:47.069 5797.415 - 5822.622: 9.2153% ( 426) 00:08:47.069 5822.622 - 5847.828: 11.6214% ( 482) 00:08:47.070 5847.828 - 5873.034: 14.1623% ( 509) 00:08:47.070 5873.034 - 5898.240: 16.7382% ( 516) 00:08:47.070 5898.240 - 5923.446: 19.3890% ( 531) 00:08:47.070 5923.446 - 5948.652: 22.0996% ( 543) 00:08:47.070 5948.652 - 5973.858: 24.7454% ( 530) 00:08:47.070 5973.858 - 5999.065: 27.5310% ( 558) 00:08:47.070 5999.065 - 6024.271: 30.2965% ( 554) 00:08:47.070 6024.271 - 6049.477: 33.1420% ( 570) 00:08:47.070 6049.477 - 6074.683: 36.0074% ( 574) 00:08:47.070 6074.683 - 6099.889: 38.8429% ( 568) 00:08:47.070 6099.889 - 6125.095: 41.6434% ( 561) 00:08:47.070 6125.095 - 6150.302: 44.4189% ( 556) 00:08:47.070 6150.302 - 6175.508: 47.2244% ( 562) 00:08:47.070 6175.508 - 6200.714: 50.0599% ( 568) 00:08:47.070 6200.714 - 6225.920: 52.8005% ( 549) 00:08:47.070 6225.920 - 6251.126: 55.5911% ( 559) 00:08:47.070 6251.126 - 6276.332: 58.3566% ( 554) 00:08:47.070 6276.332 - 6301.538: 61.1322% ( 556) 00:08:47.070 6301.538 - 6326.745: 63.9377% ( 562) 00:08:47.070 6326.745 - 6351.951: 66.7232% ( 558) 00:08:47.070 6351.951 - 6377.157: 69.5387% ( 564) 00:08:47.070 6377.157 - 6402.363: 72.3742% ( 568) 00:08:47.070 6402.363 - 6427.569: 75.1198% ( 550) 00:08:47.070 6427.569 - 6452.775: 77.8454% ( 546) 00:08:47.070 6452.775 - 6503.188: 82.9822% ( 1029) 00:08:47.070 6503.188 - 6553.600: 87.2804% ( 861) 00:08:47.070 6553.600 - 6604.012: 90.4702% ( 639) 00:08:47.070 6604.012 - 6654.425: 92.5120% ( 409) 00:08:47.070 6654.425 - 6704.837: 93.6801% ( 234) 00:08:47.070 6704.837 - 6755.249: 94.3440% ( 133) 00:08:47.070 6755.249 - 6805.662: 94.7784% ( 87) 00:08:47.070 6805.662 - 6856.074: 95.1178% ( 68) 00:08:47.070 6856.074 - 6906.486: 95.3824% ( 53) 00:08:47.070 6906.486 - 6956.898: 95.6569% ( 55) 00:08:47.070 6956.898 - 7007.311: 95.8367% ( 36) 00:08:47.070 7007.311 - 7057.723: 96.0313% ( 39) 00:08:47.070 7057.723 - 7108.135: 96.2360% ( 41) 00:08:47.070 7108.135 - 7158.548: 96.4207% ( 37) 00:08:47.070 7158.548 - 7208.960: 96.5805% ( 32) 00:08:47.070 7208.960 - 7259.372: 96.7153% ( 27) 00:08:47.070 7259.372 - 7309.785: 96.8351% ( 24) 00:08:47.070 7309.785 - 7360.197: 96.9549% ( 24) 00:08:47.070 7360.197 - 7410.609: 97.0547% ( 20) 00:08:47.070 7410.609 - 7461.022: 97.1546% ( 20) 00:08:47.070 7461.022 - 7511.434: 97.2644% ( 22) 00:08:47.070 7511.434 - 7561.846: 97.3542% ( 18) 00:08:47.070 7561.846 - 7612.258: 97.4441% ( 18) 00:08:47.070 7612.258 - 7662.671: 97.5240% ( 16) 00:08:47.070 7662.671 - 7713.083: 97.5739% ( 10) 00:08:47.070 7713.083 - 7763.495: 97.6238% ( 10) 00:08:47.070 7763.495 - 7813.908: 97.6687% ( 9) 00:08:47.070 7813.908 - 7864.320: 97.7087% ( 8) 00:08:47.070 7864.320 - 7914.732: 97.7536% ( 9) 00:08:47.070 7914.732 - 7965.145: 97.7835% ( 6) 00:08:47.070 7965.145 - 8015.557: 97.8185% ( 7) 00:08:47.070 8015.557 - 8065.969: 97.8484% ( 6) 00:08:47.070 8065.969 - 8116.382: 97.8784% ( 6) 00:08:47.070 8116.382 - 8166.794: 97.9083% ( 6) 00:08:47.070 8166.794 - 8217.206: 97.9283% ( 4) 00:08:47.070 8217.206 - 8267.618: 97.9583% ( 6) 00:08:47.070 8267.618 - 8318.031: 97.9832% ( 5) 00:08:47.070 8318.031 - 8368.443: 97.9982% ( 3) 00:08:47.070 8368.443 - 8418.855: 98.0132% ( 3) 00:08:47.070 8418.855 - 8469.268: 98.0331% ( 4) 00:08:47.070 8469.268 - 8519.680: 98.0631% ( 6) 00:08:47.070 8519.680 - 8570.092: 98.0931% ( 6) 00:08:47.070 8570.092 - 8620.505: 98.1230% ( 6) 00:08:47.070 8620.505 - 8670.917: 
98.1530% ( 6) 00:08:47.070 8670.917 - 8721.329: 98.1679% ( 3) 00:08:47.070 8721.329 - 8771.742: 98.1929% ( 5) 00:08:47.070 8771.742 - 8822.154: 98.2079% ( 3) 00:08:47.070 8822.154 - 8872.566: 98.2228% ( 3) 00:08:47.070 8872.566 - 8922.978: 98.2378% ( 3) 00:08:47.070 8922.978 - 8973.391: 98.2528% ( 3) 00:08:47.070 8973.391 - 9023.803: 98.2678% ( 3) 00:08:47.070 9023.803 - 9074.215: 98.2827% ( 3) 00:08:47.070 9074.215 - 9124.628: 98.3027% ( 4) 00:08:47.070 9124.628 - 9175.040: 98.3377% ( 7) 00:08:47.070 9175.040 - 9225.452: 98.3826% ( 9) 00:08:47.070 9225.452 - 9275.865: 98.4325% ( 10) 00:08:47.070 9275.865 - 9326.277: 98.4974% ( 13) 00:08:47.070 9326.277 - 9376.689: 98.5573% ( 12) 00:08:47.070 9376.689 - 9427.102: 98.6322% ( 15) 00:08:47.070 9427.102 - 9477.514: 98.6721% ( 8) 00:08:47.070 9477.514 - 9527.926: 98.7171% ( 9) 00:08:47.070 9527.926 - 9578.338: 98.7520% ( 7) 00:08:47.070 9578.338 - 9628.751: 98.7969% ( 9) 00:08:47.070 9628.751 - 9679.163: 98.8319% ( 7) 00:08:47.070 9679.163 - 9729.575: 98.8718% ( 8) 00:08:47.070 9729.575 - 9779.988: 98.9167% ( 9) 00:08:47.070 9779.988 - 9830.400: 98.9517% ( 7) 00:08:47.070 9830.400 - 9880.812: 99.0016% ( 10) 00:08:47.070 9880.812 - 9931.225: 99.0365% ( 7) 00:08:47.070 9931.225 - 9981.637: 99.0865% ( 10) 00:08:47.070 9981.637 - 10032.049: 99.1264% ( 8) 00:08:47.070 10032.049 - 10082.462: 99.1663% ( 8) 00:08:47.070 10082.462 - 10132.874: 99.2013% ( 7) 00:08:47.070 10132.874 - 10183.286: 99.2262% ( 5) 00:08:47.070 10183.286 - 10233.698: 99.2412% ( 3) 00:08:47.070 10233.698 - 10284.111: 99.2512% ( 2) 00:08:47.070 10284.111 - 10334.523: 99.2612% ( 2) 00:08:47.070 10334.523 - 10384.935: 99.2712% ( 2) 00:08:47.070 10384.935 - 10435.348: 99.2812% ( 2) 00:08:47.070 10435.348 - 10485.760: 99.2861% ( 1) 00:08:47.070 10485.760 - 10536.172: 99.2961% ( 2) 00:08:47.070 10536.172 - 10586.585: 99.3061% ( 2) 00:08:47.070 10586.585 - 10636.997: 99.3161% ( 2) 00:08:47.070 10636.997 - 10687.409: 99.3211% ( 1) 00:08:47.070 10687.409 - 10737.822: 99.3311% ( 2) 00:08:47.070 10737.822 - 10788.234: 99.3411% ( 2) 00:08:47.070 10788.234 - 10838.646: 99.3460% ( 1) 00:08:47.070 10838.646 - 10889.058: 99.3560% ( 2) 00:08:47.070 10889.058 - 10939.471: 99.3610% ( 1) 00:08:47.070 19358.326 - 19459.151: 99.3760% ( 3) 00:08:47.070 19459.151 - 19559.975: 99.3960% ( 4) 00:08:47.070 19559.975 - 19660.800: 99.4159% ( 4) 00:08:47.070 19660.800 - 19761.625: 99.4359% ( 4) 00:08:47.070 19761.625 - 19862.449: 99.4559% ( 4) 00:08:47.070 19862.449 - 19963.274: 99.4808% ( 5) 00:08:47.070 19963.274 - 20064.098: 99.5008% ( 4) 00:08:47.070 20064.098 - 20164.923: 99.5208% ( 4) 00:08:47.070 20164.923 - 20265.748: 99.5407% ( 4) 00:08:47.070 20265.748 - 20366.572: 99.5607% ( 4) 00:08:47.070 20366.572 - 20467.397: 99.5807% ( 4) 00:08:47.070 20467.397 - 20568.222: 99.6056% ( 5) 00:08:47.070 20568.222 - 20669.046: 99.6256% ( 4) 00:08:47.070 20669.046 - 20769.871: 99.6456% ( 4) 00:08:47.070 20769.871 - 20870.695: 99.6655% ( 4) 00:08:47.070 20870.695 - 20971.520: 99.6805% ( 3) 00:08:47.070 23693.785 - 23794.609: 99.6955% ( 3) 00:08:47.070 23794.609 - 23895.434: 99.7155% ( 4) 00:08:47.070 23895.434 - 23996.258: 99.7354% ( 4) 00:08:47.070 23996.258 - 24097.083: 99.7554% ( 4) 00:08:47.070 24097.083 - 24197.908: 99.7754% ( 4) 00:08:47.070 24197.908 - 24298.732: 99.7953% ( 4) 00:08:47.070 24298.732 - 24399.557: 99.8203% ( 5) 00:08:47.070 24399.557 - 24500.382: 99.8403% ( 4) 00:08:47.070 24500.382 - 24601.206: 99.8552% ( 3) 00:08:47.070 24601.206 - 24702.031: 99.8802% ( 5) 00:08:47.070 24702.031 - 24802.855: 
99.9002% ( 4) 00:08:47.070 24802.855 - 24903.680: 99.9201% ( 4) 00:08:47.070 24903.680 - 25004.505: 99.9401% ( 4) 00:08:47.070 25004.505 - 25105.329: 99.9651% ( 5) 00:08:47.070 25105.329 - 25206.154: 99.9850% ( 4) 00:08:47.070 25206.154 - 25306.978: 100.0000% ( 3) 00:08:47.070 00:08:47.070 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:47.070 ============================================================================== 00:08:47.070 Range in us Cumulative IO count 00:08:47.070 5545.354 - 5570.560: 0.0149% ( 3) 00:08:47.070 5570.560 - 5595.766: 0.0299% ( 3) 00:08:47.070 5595.766 - 5620.972: 0.0896% ( 12) 00:08:47.070 5620.972 - 5646.178: 0.2189% ( 26) 00:08:47.070 5646.178 - 5671.385: 0.5623% ( 69) 00:08:47.070 5671.385 - 5696.591: 1.3784% ( 164) 00:08:47.071 5696.591 - 5721.797: 2.3686% ( 199) 00:08:47.071 5721.797 - 5747.003: 3.5082% ( 229) 00:08:47.071 5747.003 - 5772.209: 5.0159% ( 303) 00:08:47.071 5772.209 - 5797.415: 6.8869% ( 376) 00:08:47.071 5797.415 - 5822.622: 9.1660% ( 458) 00:08:47.071 5822.622 - 5847.828: 11.5993% ( 489) 00:08:47.071 5847.828 - 5873.034: 14.1471% ( 512) 00:08:47.071 5873.034 - 5898.240: 16.7596% ( 525) 00:08:47.071 5898.240 - 5923.446: 19.2277% ( 496) 00:08:47.071 5923.446 - 5948.652: 21.9895% ( 555) 00:08:47.071 5948.652 - 5973.858: 24.6666% ( 538) 00:08:47.071 5973.858 - 5999.065: 27.3786% ( 545) 00:08:47.071 5999.065 - 6024.271: 30.1702% ( 561) 00:08:47.071 6024.271 - 6049.477: 32.9717% ( 563) 00:08:47.071 6049.477 - 6074.683: 35.7832% ( 565) 00:08:47.071 6074.683 - 6099.889: 38.5748% ( 561) 00:08:47.071 6099.889 - 6125.095: 41.3565% ( 559) 00:08:47.071 6125.095 - 6150.302: 44.1531% ( 562) 00:08:47.071 6150.302 - 6175.508: 46.9397% ( 560) 00:08:47.071 6175.508 - 6200.714: 49.7761% ( 570) 00:08:47.071 6200.714 - 6225.920: 52.5279% ( 553) 00:08:47.071 6225.920 - 6251.126: 55.3493% ( 567) 00:08:47.071 6251.126 - 6276.332: 58.1011% ( 553) 00:08:47.071 6276.332 - 6301.538: 60.8280% ( 548) 00:08:47.071 6301.538 - 6326.745: 63.6196% ( 561) 00:08:47.071 6326.745 - 6351.951: 66.4112% ( 561) 00:08:47.071 6351.951 - 6377.157: 69.1829% ( 557) 00:08:47.071 6377.157 - 6402.363: 71.9944% ( 565) 00:08:47.071 6402.363 - 6427.569: 74.7711% ( 558) 00:08:47.071 6427.569 - 6452.775: 77.5229% ( 553) 00:08:47.071 6452.775 - 6503.188: 82.5836% ( 1017) 00:08:47.071 6503.188 - 6553.600: 86.8183% ( 851) 00:08:47.071 6553.600 - 6604.012: 90.1523% ( 670) 00:08:47.071 6604.012 - 6654.425: 92.2472% ( 421) 00:08:47.071 6654.425 - 6704.837: 93.4216% ( 236) 00:08:47.071 6704.837 - 6755.249: 94.1381% ( 144) 00:08:47.071 6755.249 - 6805.662: 94.6059% ( 94) 00:08:47.071 6805.662 - 6856.074: 94.9691% ( 73) 00:08:47.071 6856.074 - 6906.486: 95.2677% ( 60) 00:08:47.071 6906.486 - 6956.898: 95.5265% ( 52) 00:08:47.071 6956.898 - 7007.311: 95.7355% ( 42) 00:08:47.071 7007.311 - 7057.723: 95.9196% ( 37) 00:08:47.071 7057.723 - 7108.135: 96.1037% ( 37) 00:08:47.071 7108.135 - 7158.548: 96.2729% ( 34) 00:08:47.071 7158.548 - 7208.960: 96.4172% ( 29) 00:08:47.071 7208.960 - 7259.372: 96.5516% ( 27) 00:08:47.071 7259.372 - 7309.785: 96.6909% ( 28) 00:08:47.071 7309.785 - 7360.197: 96.8103% ( 24) 00:08:47.071 7360.197 - 7410.609: 96.9248% ( 23) 00:08:47.071 7410.609 - 7461.022: 97.0293% ( 21) 00:08:47.071 7461.022 - 7511.434: 97.1288% ( 20) 00:08:47.071 7511.434 - 7561.846: 97.2034% ( 15) 00:08:47.071 7561.846 - 7612.258: 97.2830% ( 16) 00:08:47.071 7612.258 - 7662.671: 97.3428% ( 12) 00:08:47.071 7662.671 - 7713.083: 97.3975% ( 11) 00:08:47.071 7713.083 - 7763.495: 97.4423% ( 9) 
00:08:47.071 7763.495 - 7813.908: 97.5070% ( 13) 00:08:47.071 7813.908 - 7864.320: 97.5667% ( 12) 00:08:47.071 7864.320 - 7914.732: 97.6164% ( 10) 00:08:47.071 7914.732 - 7965.145: 97.6562% ( 8) 00:08:47.071 7965.145 - 8015.557: 97.6961% ( 8) 00:08:47.071 8015.557 - 8065.969: 97.7359% ( 8) 00:08:47.071 8065.969 - 8116.382: 97.7757% ( 8) 00:08:47.071 8116.382 - 8166.794: 97.8205% ( 9) 00:08:47.071 8166.794 - 8217.206: 97.8603% ( 8) 00:08:47.071 8217.206 - 8267.618: 97.9051% ( 9) 00:08:47.071 8267.618 - 8318.031: 97.9200% ( 3) 00:08:47.071 8318.031 - 8368.443: 97.9349% ( 3) 00:08:47.071 8368.443 - 8418.855: 97.9449% ( 2) 00:08:47.071 8418.855 - 8469.268: 97.9598% ( 3) 00:08:47.071 8469.268 - 8519.680: 97.9797% ( 4) 00:08:47.071 8519.680 - 8570.092: 98.0195% ( 8) 00:08:47.071 8570.092 - 8620.505: 98.0543% ( 7) 00:08:47.071 8620.505 - 8670.917: 98.1190% ( 13) 00:08:47.071 8670.917 - 8721.329: 98.1489% ( 6) 00:08:47.071 8721.329 - 8771.742: 98.2036% ( 11) 00:08:47.071 8771.742 - 8822.154: 98.2484% ( 9) 00:08:47.071 8822.154 - 8872.566: 98.3081% ( 12) 00:08:47.071 8872.566 - 8922.978: 98.3579% ( 10) 00:08:47.071 8922.978 - 8973.391: 98.3977% ( 8) 00:08:47.071 8973.391 - 9023.803: 98.4425% ( 9) 00:08:47.071 9023.803 - 9074.215: 98.4823% ( 8) 00:08:47.071 9074.215 - 9124.628: 98.5171% ( 7) 00:08:47.071 9124.628 - 9175.040: 98.5520% ( 7) 00:08:47.071 9175.040 - 9225.452: 98.5868% ( 7) 00:08:47.071 9225.452 - 9275.865: 98.6216% ( 7) 00:08:47.071 9275.865 - 9326.277: 98.6564% ( 7) 00:08:47.071 9326.277 - 9376.689: 98.6913% ( 7) 00:08:47.071 9376.689 - 9427.102: 98.7510% ( 12) 00:08:47.071 9427.102 - 9477.514: 98.8008% ( 10) 00:08:47.071 9477.514 - 9527.926: 98.8555% ( 11) 00:08:47.071 9527.926 - 9578.338: 98.9053% ( 10) 00:08:47.071 9578.338 - 9628.751: 98.9500% ( 9) 00:08:47.071 9628.751 - 9679.163: 98.9849% ( 7) 00:08:47.071 9679.163 - 9729.575: 99.0247% ( 8) 00:08:47.071 9729.575 - 9779.988: 99.0545% ( 6) 00:08:47.071 9779.988 - 9830.400: 99.0894% ( 7) 00:08:47.071 9830.400 - 9880.812: 99.1242% ( 7) 00:08:47.071 9880.812 - 9931.225: 99.1590% ( 7) 00:08:47.071 9931.225 - 9981.637: 99.1939% ( 7) 00:08:47.071 9981.637 - 10032.049: 99.2337% ( 8) 00:08:47.071 10032.049 - 10082.462: 99.2635% ( 6) 00:08:47.071 10082.462 - 10132.874: 99.2984% ( 7) 00:08:47.071 10132.874 - 10183.286: 99.3232% ( 5) 00:08:47.071 10183.286 - 10233.698: 99.3382% ( 3) 00:08:47.071 10233.698 - 10284.111: 99.3531% ( 3) 00:08:47.071 10284.111 - 10334.523: 99.3631% ( 2) 00:08:47.071 14216.271 - 14317.095: 99.3730% ( 2) 00:08:47.071 14317.095 - 14417.920: 99.3979% ( 5) 00:08:47.071 14417.920 - 14518.745: 99.4178% ( 4) 00:08:47.071 14518.745 - 14619.569: 99.4377% ( 4) 00:08:47.071 14619.569 - 14720.394: 99.4576% ( 4) 00:08:47.071 14720.394 - 14821.218: 99.4775% ( 4) 00:08:47.071 14821.218 - 14922.043: 99.5024% ( 5) 00:08:47.071 14922.043 - 15022.868: 99.5223% ( 4) 00:08:47.071 15022.868 - 15123.692: 99.5422% ( 4) 00:08:47.071 15123.692 - 15224.517: 99.5571% ( 3) 00:08:47.071 15224.517 - 15325.342: 99.5820% ( 5) 00:08:47.071 15325.342 - 15426.166: 99.6019% ( 4) 00:08:47.071 15426.166 - 15526.991: 99.6218% ( 4) 00:08:47.071 15526.991 - 15627.815: 99.6417% ( 4) 00:08:47.071 15627.815 - 15728.640: 99.6616% ( 4) 00:08:47.071 15728.640 - 15829.465: 99.6815% ( 4) 00:08:47.071 18753.378 - 18854.203: 99.7014% ( 4) 00:08:47.071 18854.203 - 18955.028: 99.7213% ( 4) 00:08:47.071 18955.028 - 19055.852: 99.7462% ( 5) 00:08:47.071 19055.852 - 19156.677: 99.7661% ( 4) 00:08:47.071 19156.677 - 19257.502: 99.7860% ( 4) 00:08:47.071 19257.502 - 
19358.326: 99.8059% ( 4) 00:08:47.071 19358.326 - 19459.151: 99.8258% ( 4) 00:08:47.071 19459.151 - 19559.975: 99.8507% ( 5) 00:08:47.071 19559.975 - 19660.800: 99.8706% ( 4) 00:08:47.071 19660.800 - 19761.625: 99.8905% ( 4) 00:08:47.071 19761.625 - 19862.449: 99.9104% ( 4) 00:08:47.071 19862.449 - 19963.274: 99.9303% ( 4) 00:08:47.071 19963.274 - 20064.098: 99.9502% ( 4) 00:08:47.071 20064.098 - 20164.923: 99.9751% ( 5) 00:08:47.072 20164.923 - 20265.748: 99.9950% ( 4) 00:08:47.072 20265.748 - 20366.572: 100.0000% ( 1) 00:08:47.072 00:08:47.072 03:59:34 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:08:48.003 Initializing NVMe Controllers 00:08:48.003 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:48.003 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:48.003 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:48.003 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:48.003 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:48.003 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:48.003 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:48.003 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:48.003 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:48.003 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:48.003 Initialization complete. Launching workers. 00:08:48.003 ======================================================== 00:08:48.003 Latency(us) 00:08:48.003 Device Information : IOPS MiB/s Average min max 00:08:48.003 PCIE (0000:00:10.0) NSID 1 from core 0: 17197.24 201.53 7451.91 5734.42 31930.16 00:08:48.003 PCIE (0000:00:11.0) NSID 1 from core 0: 17197.24 201.53 7440.23 5935.57 30053.70 00:08:48.003 PCIE (0000:00:13.0) NSID 1 from core 0: 17197.24 201.53 7428.57 5920.55 28601.66 00:08:48.003 PCIE (0000:00:12.0) NSID 1 from core 0: 17197.24 201.53 7416.65 5897.16 26910.24 00:08:48.003 PCIE (0000:00:12.0) NSID 2 from core 0: 17197.24 201.53 7404.75 6055.46 25235.96 00:08:48.003 PCIE (0000:00:12.0) NSID 3 from core 0: 17261.17 202.28 7365.44 5968.76 19883.64 00:08:48.003 ======================================================== 00:08:48.003 Total : 103247.36 1209.93 7417.89 5734.42 31930.16 00:08:48.003 00:08:48.003 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:48.003 ================================================================================= 00:08:48.003 1.00000% : 6200.714us 00:08:48.003 10.00000% : 6553.600us 00:08:48.003 25.00000% : 6856.074us 00:08:48.003 50.00000% : 7158.548us 00:08:48.003 75.00000% : 7713.083us 00:08:48.003 90.00000% : 8318.031us 00:08:48.003 95.00000% : 8771.742us 00:08:48.003 98.00000% : 9275.865us 00:08:48.003 99.00000% : 9779.988us 00:08:48.003 99.50000% : 26617.698us 00:08:48.003 99.90000% : 31457.280us 00:08:48.003 99.99000% : 32062.228us 00:08:48.003 99.99900% : 32062.228us 00:08:48.003 99.99990% : 32062.228us 00:08:48.003 99.99999% : 32062.228us 00:08:48.003 00:08:48.003 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:48.003 ================================================================================= 00:08:48.003 1.00000% : 6276.332us 00:08:48.003 10.00000% : 6654.425us 00:08:48.003 25.00000% : 6856.074us 00:08:48.003 50.00000% : 7158.548us 00:08:48.003 75.00000% : 7662.671us 00:08:48.003 90.00000% : 8267.618us 00:08:48.003 95.00000% : 8570.092us 00:08:48.003 98.00000% : 9376.689us 00:08:48.003 99.00000% : 9931.225us 
00:08:48.003 99.50000% : 24903.680us 00:08:48.003 99.90000% : 29642.437us 00:08:48.003 99.99000% : 30045.735us 00:08:48.003 99.99900% : 30247.385us 00:08:48.003 99.99990% : 30247.385us 00:08:48.003 99.99999% : 30247.385us 00:08:48.003 00:08:48.003 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:48.003 ================================================================================= 00:08:48.003 1.00000% : 6276.332us 00:08:48.003 10.00000% : 6654.425us 00:08:48.003 25.00000% : 6856.074us 00:08:48.003 50.00000% : 7158.548us 00:08:48.003 75.00000% : 7713.083us 00:08:48.003 90.00000% : 8267.618us 00:08:48.003 95.00000% : 8620.505us 00:08:48.003 98.00000% : 9275.865us 00:08:48.003 99.00000% : 9729.575us 00:08:48.003 99.50000% : 23492.135us 00:08:48.003 99.90000% : 28230.892us 00:08:48.003 99.99000% : 28634.191us 00:08:48.003 99.99900% : 28634.191us 00:08:48.003 99.99990% : 28634.191us 00:08:48.003 99.99999% : 28634.191us 00:08:48.003 00:08:48.003 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:48.003 ================================================================================= 00:08:48.003 1.00000% : 6301.538us 00:08:48.003 10.00000% : 6654.425us 00:08:48.003 25.00000% : 6856.074us 00:08:48.003 50.00000% : 7158.548us 00:08:48.003 75.00000% : 7713.083us 00:08:48.003 90.00000% : 8267.618us 00:08:48.003 95.00000% : 8620.505us 00:08:48.003 98.00000% : 9275.865us 00:08:48.003 99.00000% : 9628.751us 00:08:48.003 99.50000% : 21677.292us 00:08:48.003 99.90000% : 26617.698us 00:08:48.003 99.99000% : 27020.997us 00:08:48.003 99.99900% : 27020.997us 00:08:48.003 99.99990% : 27020.997us 00:08:48.003 99.99999% : 27020.997us 00:08:48.003 00:08:48.003 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:48.003 ================================================================================= 00:08:48.003 1.00000% : 6301.538us 00:08:48.003 10.00000% : 6654.425us 00:08:48.003 25.00000% : 6856.074us 00:08:48.003 50.00000% : 7158.548us 00:08:48.003 75.00000% : 7713.083us 00:08:48.003 90.00000% : 8267.618us 00:08:48.003 95.00000% : 8670.917us 00:08:48.003 98.00000% : 9275.865us 00:08:48.003 99.00000% : 9679.163us 00:08:48.003 99.50000% : 20064.098us 00:08:48.003 99.90000% : 24702.031us 00:08:48.003 99.99000% : 25206.154us 00:08:48.003 99.99900% : 25306.978us 00:08:48.003 99.99990% : 25306.978us 00:08:48.003 99.99999% : 25306.978us 00:08:48.003 00:08:48.003 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:48.003 ================================================================================= 00:08:48.003 1.00000% : 6301.538us 00:08:48.003 10.00000% : 6654.425us 00:08:48.003 25.00000% : 6856.074us 00:08:48.003 50.00000% : 7158.548us 00:08:48.003 75.00000% : 7713.083us 00:08:48.003 90.00000% : 8267.618us 00:08:48.003 95.00000% : 8620.505us 00:08:48.003 98.00000% : 9275.865us 00:08:48.003 99.00000% : 9427.102us 00:08:48.003 99.50000% : 14417.920us 00:08:48.003 99.90000% : 19559.975us 00:08:48.003 99.99000% : 19862.449us 00:08:48.003 99.99900% : 19963.274us 00:08:48.003 99.99990% : 19963.274us 00:08:48.003 99.99999% : 19963.274us 00:08:48.003 00:08:48.003 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:48.003 ============================================================================== 00:08:48.003 Range in us Cumulative IO count 00:08:48.003 5721.797 - 5747.003: 0.0058% ( 1) 00:08:48.003 5822.622 - 5847.828: 0.0232% ( 3) 00:08:48.003 5847.828 - 5873.034: 0.0349% ( 2) 00:08:48.003 5873.034 - 5898.240: 0.0523% 
( 3) 00:08:48.003 5898.240 - 5923.446: 0.0755% ( 4) 00:08:48.003 5923.446 - 5948.652: 0.1046% ( 5) 00:08:48.003 5948.652 - 5973.858: 0.1452% ( 7) 00:08:48.003 5973.858 - 5999.065: 0.2091% ( 11) 00:08:48.003 5999.065 - 6024.271: 0.2556% ( 8) 00:08:48.003 6024.271 - 6049.477: 0.3020% ( 8) 00:08:48.003 6049.477 - 6074.683: 0.3369% ( 6) 00:08:48.003 6074.683 - 6099.889: 0.4414% ( 18) 00:08:48.003 6099.889 - 6125.095: 0.5053% ( 11) 00:08:48.003 6125.095 - 6150.302: 0.6331% ( 22) 00:08:48.003 6150.302 - 6175.508: 0.8132% ( 31) 00:08:48.003 6175.508 - 6200.714: 1.0223% ( 36) 00:08:48.003 6200.714 - 6225.920: 1.2546% ( 40) 00:08:48.003 6225.920 - 6251.126: 1.5683% ( 54) 00:08:48.004 6251.126 - 6276.332: 1.9807% ( 71) 00:08:48.004 6276.332 - 6301.538: 2.4396% ( 79) 00:08:48.004 6301.538 - 6326.745: 2.9333% ( 85) 00:08:48.004 6326.745 - 6351.951: 3.5258% ( 102) 00:08:48.004 6351.951 - 6377.157: 4.1647% ( 110) 00:08:48.004 6377.157 - 6402.363: 4.6817% ( 89) 00:08:48.004 6402.363 - 6427.569: 5.3903% ( 122) 00:08:48.004 6427.569 - 6452.775: 6.0816% ( 119) 00:08:48.004 6452.775 - 6503.188: 7.7312% ( 284) 00:08:48.004 6503.188 - 6553.600: 10.0488% ( 399) 00:08:48.004 6553.600 - 6604.012: 12.0411% ( 343) 00:08:48.004 6604.012 - 6654.425: 14.6375% ( 447) 00:08:48.004 6654.425 - 6704.837: 17.2630% ( 452) 00:08:48.004 6704.837 - 6755.249: 20.5681% ( 569) 00:08:48.004 6755.249 - 6805.662: 24.2681% ( 637) 00:08:48.004 6805.662 - 6856.074: 27.7300% ( 596) 00:08:48.004 6856.074 - 6906.486: 31.6450% ( 674) 00:08:48.004 6906.486 - 6956.898: 35.7110% ( 700) 00:08:48.004 6956.898 - 7007.311: 39.7421% ( 694) 00:08:48.004 7007.311 - 7057.723: 43.2853% ( 610) 00:08:48.004 7057.723 - 7108.135: 47.1654% ( 668) 00:08:48.004 7108.135 - 7158.548: 50.7261% ( 613) 00:08:48.004 7158.548 - 7208.960: 53.8975% ( 546) 00:08:48.004 7208.960 - 7259.372: 56.9470% ( 525) 00:08:48.004 7259.372 - 7309.785: 59.7409% ( 481) 00:08:48.004 7309.785 - 7360.197: 62.4535% ( 467) 00:08:48.004 7360.197 - 7410.609: 64.7305% ( 392) 00:08:48.004 7410.609 - 7461.022: 67.0249% ( 395) 00:08:48.004 7461.022 - 7511.434: 68.7442% ( 296) 00:08:48.004 7511.434 - 7561.846: 70.6901% ( 335) 00:08:48.004 7561.846 - 7612.258: 72.5314% ( 317) 00:08:48.004 7612.258 - 7662.671: 74.3494% ( 313) 00:08:48.004 7662.671 - 7713.083: 76.2372% ( 325) 00:08:48.004 7713.083 - 7763.495: 77.8404% ( 276) 00:08:48.004 7763.495 - 7813.908: 79.7398% ( 327) 00:08:48.004 7813.908 - 7864.320: 81.3139% ( 271) 00:08:48.004 7864.320 - 7914.732: 82.8415% ( 263) 00:08:48.004 7914.732 - 7965.145: 84.1194% ( 220) 00:08:48.004 7965.145 - 8015.557: 85.1766% ( 182) 00:08:48.004 8015.557 - 8065.969: 86.1408% ( 166) 00:08:48.004 8065.969 - 8116.382: 87.0295% ( 153) 00:08:48.004 8116.382 - 8166.794: 88.0402% ( 174) 00:08:48.004 8166.794 - 8217.206: 89.0393% ( 172) 00:08:48.004 8217.206 - 8267.618: 89.9105% ( 150) 00:08:48.004 8267.618 - 8318.031: 90.5263% ( 106) 00:08:48.004 8318.031 - 8368.443: 91.0955% ( 98) 00:08:48.004 8368.443 - 8418.855: 91.5950% ( 86) 00:08:48.004 8418.855 - 8469.268: 92.1468% ( 95) 00:08:48.004 8469.268 - 8519.680: 92.7277% ( 100) 00:08:48.004 8519.680 - 8570.092: 93.2737% ( 94) 00:08:48.004 8570.092 - 8620.505: 93.8546% ( 100) 00:08:48.004 8620.505 - 8670.917: 94.2612% ( 70) 00:08:48.004 8670.917 - 8721.329: 94.7200% ( 79) 00:08:48.004 8721.329 - 8771.742: 95.1441% ( 73) 00:08:48.004 8771.742 - 8822.154: 95.4809% ( 58) 00:08:48.004 8822.154 - 8872.566: 95.8062% ( 56) 00:08:48.004 8872.566 - 8922.978: 96.1199% ( 54) 00:08:48.004 8922.978 - 8973.391: 96.5091% ( 67) 
00:08:48.004 8973.391 - 9023.803: 96.8692% ( 62) 00:08:48.004 9023.803 - 9074.215: 97.1712% ( 52) 00:08:48.004 9074.215 - 9124.628: 97.3629% ( 33) 00:08:48.004 9124.628 - 9175.040: 97.6243% ( 45) 00:08:48.004 9175.040 - 9225.452: 97.8160% ( 33) 00:08:48.004 9225.452 - 9275.865: 98.0077% ( 33) 00:08:48.004 9275.865 - 9326.277: 98.1877% ( 31) 00:08:48.004 9326.277 - 9376.689: 98.3678% ( 31) 00:08:48.004 9376.689 - 9427.102: 98.5304% ( 28) 00:08:48.004 9427.102 - 9477.514: 98.6757% ( 25) 00:08:48.004 9477.514 - 9527.926: 98.7860% ( 19) 00:08:48.004 9527.926 - 9578.338: 98.8964% ( 19) 00:08:48.004 9578.338 - 9628.751: 98.9254% ( 5) 00:08:48.004 9628.751 - 9679.163: 98.9312% ( 1) 00:08:48.004 9679.163 - 9729.575: 98.9603% ( 5) 00:08:48.004 9729.575 - 9779.988: 99.0009% ( 7) 00:08:48.004 9779.988 - 9830.400: 99.0242% ( 4) 00:08:48.004 9830.400 - 9880.812: 99.0590% ( 6) 00:08:48.004 9880.812 - 9931.225: 99.0939% ( 6) 00:08:48.004 9931.225 - 9981.637: 99.1229% ( 5) 00:08:48.004 9981.637 - 10032.049: 99.1520% ( 5) 00:08:48.004 10032.049 - 10082.462: 99.1984% ( 8) 00:08:48.004 10082.462 - 10132.874: 99.2275% ( 5) 00:08:48.004 10233.698 - 10284.111: 99.2333% ( 1) 00:08:48.004 10284.111 - 10334.523: 99.2507% ( 3) 00:08:48.004 10334.523 - 10384.935: 99.2565% ( 1) 00:08:48.004 25508.628 - 25609.452: 99.2914% ( 6) 00:08:48.004 25609.452 - 25710.277: 99.3204% ( 5) 00:08:48.004 25710.277 - 25811.102: 99.3378% ( 3) 00:08:48.004 25811.102 - 26012.751: 99.3727% ( 6) 00:08:48.004 26012.751 - 26214.400: 99.4250% ( 9) 00:08:48.004 26214.400 - 26416.049: 99.4656% ( 7) 00:08:48.004 26416.049 - 26617.698: 99.5063% ( 7) 00:08:48.004 26617.698 - 26819.348: 99.5469% ( 7) 00:08:48.004 26819.348 - 27020.997: 99.5934% ( 8) 00:08:48.004 27020.997 - 27222.646: 99.6283% ( 6) 00:08:48.004 30045.735 - 30247.385: 99.6341% ( 1) 00:08:48.004 30247.385 - 30449.034: 99.6805% ( 8) 00:08:48.004 30449.034 - 30650.683: 99.7270% ( 8) 00:08:48.004 30650.683 - 30852.332: 99.7735% ( 8) 00:08:48.004 30852.332 - 31053.982: 99.8141% ( 7) 00:08:48.004 31053.982 - 31255.631: 99.8606% ( 8) 00:08:48.004 31255.631 - 31457.280: 99.9071% ( 8) 00:08:48.004 31457.280 - 31658.929: 99.9419% ( 6) 00:08:48.004 31658.929 - 31860.578: 99.9884% ( 8) 00:08:48.004 31860.578 - 32062.228: 100.0000% ( 2) 00:08:48.004 00:08:48.004 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:48.004 ============================================================================== 00:08:48.004 Range in us Cumulative IO count 00:08:48.004 5923.446 - 5948.652: 0.0058% ( 1) 00:08:48.004 5948.652 - 5973.858: 0.0174% ( 2) 00:08:48.004 5999.065 - 6024.271: 0.0349% ( 3) 00:08:48.004 6024.271 - 6049.477: 0.0523% ( 3) 00:08:48.004 6049.477 - 6074.683: 0.0755% ( 4) 00:08:48.004 6074.683 - 6099.889: 0.1046% ( 5) 00:08:48.004 6099.889 - 6125.095: 0.1568% ( 9) 00:08:48.004 6125.095 - 6150.302: 0.2091% ( 9) 00:08:48.004 6150.302 - 6175.508: 0.3659% ( 27) 00:08:48.004 6175.508 - 6200.714: 0.4473% ( 14) 00:08:48.004 6200.714 - 6225.920: 0.5809% ( 23) 00:08:48.004 6225.920 - 6251.126: 0.7609% ( 31) 00:08:48.004 6251.126 - 6276.332: 1.3592% ( 103) 00:08:48.004 6276.332 - 6301.538: 1.9342% ( 99) 00:08:48.004 6301.538 - 6326.745: 2.1608% ( 39) 00:08:48.004 6326.745 - 6351.951: 2.3641% ( 35) 00:08:48.004 6351.951 - 6377.157: 2.5616% ( 34) 00:08:48.004 6377.157 - 6402.363: 2.9159% ( 61) 00:08:48.004 6402.363 - 6427.569: 3.3051% ( 67) 00:08:48.004 6427.569 - 6452.775: 3.7872% ( 83) 00:08:48.004 6452.775 - 6503.188: 4.8676% ( 186) 00:08:48.004 6503.188 - 6553.600: 6.5520% ( 290) 
00:08:48.004 6553.600 - 6604.012: 8.6954% ( 369) 00:08:48.004 6604.012 - 6654.425: 11.7333% ( 523) 00:08:48.004 6654.425 - 6704.837: 15.2242% ( 601) 00:08:48.004 6704.837 - 6755.249: 18.2969% ( 529) 00:08:48.004 6755.249 - 6805.662: 22.2526% ( 681) 00:08:48.004 6805.662 - 6856.074: 26.4521% ( 723) 00:08:48.004 6856.074 - 6906.486: 30.8899% ( 764) 00:08:48.004 6906.486 - 6956.898: 35.3915% ( 775) 00:08:48.004 6956.898 - 7007.311: 40.4217% ( 866) 00:08:48.004 7007.311 - 7057.723: 44.9640% ( 782) 00:08:48.004 7057.723 - 7108.135: 49.2275% ( 734) 00:08:48.004 7108.135 - 7158.548: 52.4919% ( 562) 00:08:48.004 7158.548 - 7208.960: 55.4426% ( 508) 00:08:48.004 7208.960 - 7259.372: 58.5444% ( 534) 00:08:48.004 7259.372 - 7309.785: 61.6171% ( 529) 00:08:48.004 7309.785 - 7360.197: 64.1322% ( 433) 00:08:48.004 7360.197 - 7410.609: 66.0723% ( 334) 00:08:48.004 7410.609 - 7461.022: 68.1750% ( 362) 00:08:48.004 7461.022 - 7511.434: 69.8362% ( 286) 00:08:48.004 7511.434 - 7561.846: 72.1306% ( 395) 00:08:48.004 7561.846 - 7612.258: 73.7105% ( 272) 00:08:48.004 7612.258 - 7662.671: 75.1278% ( 244) 00:08:48.004 7662.671 - 7713.083: 76.3824% ( 216) 00:08:48.004 7713.083 - 7763.495: 77.5093% ( 194) 00:08:48.004 7763.495 - 7813.908: 78.5374% ( 177) 00:08:48.004 7813.908 - 7864.320: 79.7921% ( 216) 00:08:48.004 7864.320 - 7914.732: 80.9131% ( 193) 00:08:48.004 7914.732 - 7965.145: 82.3594% ( 249) 00:08:48.004 7965.145 - 8015.557: 84.0091% ( 284) 00:08:48.004 8015.557 - 8065.969: 85.5483% ( 265) 00:08:48.004 8065.969 - 8116.382: 86.8088% ( 217) 00:08:48.004 8116.382 - 8166.794: 87.7962% ( 170) 00:08:48.004 8166.794 - 8217.206: 89.1090% ( 226) 00:08:48.004 8217.206 - 8267.618: 90.5843% ( 254) 00:08:48.004 8267.618 - 8318.031: 91.5079% ( 159) 00:08:48.004 8318.031 - 8368.443: 92.3153% ( 139) 00:08:48.004 8368.443 - 8418.855: 93.1285% ( 140) 00:08:48.004 8418.855 - 8469.268: 93.8778% ( 129) 00:08:48.004 8469.268 - 8519.680: 94.6445% ( 132) 00:08:48.004 8519.680 - 8570.092: 95.1324% ( 84) 00:08:48.004 8570.092 - 8620.505: 95.5623% ( 74) 00:08:48.004 8620.505 - 8670.917: 95.9572% ( 68) 00:08:48.004 8670.917 - 8721.329: 96.2941% ( 58) 00:08:48.004 8721.329 - 8771.742: 96.4568% ( 28) 00:08:48.004 8771.742 - 8822.154: 96.6020% ( 25) 00:08:48.004 8822.154 - 8872.566: 96.7124% ( 19) 00:08:48.004 8872.566 - 8922.978: 96.8401% ( 22) 00:08:48.004 8922.978 - 8973.391: 96.9389% ( 17) 00:08:48.004 8973.391 - 9023.803: 97.0493% ( 19) 00:08:48.004 9023.803 - 9074.215: 97.1422% ( 16) 00:08:48.004 9074.215 - 9124.628: 97.2584% ( 20) 00:08:48.004 9124.628 - 9175.040: 97.3862% ( 22) 00:08:48.004 9175.040 - 9225.452: 97.6998% ( 54) 00:08:48.004 9225.452 - 9275.865: 97.8334% ( 23) 00:08:48.004 9275.865 - 9326.277: 97.9438% ( 19) 00:08:48.004 9326.277 - 9376.689: 98.0541% ( 19) 00:08:48.004 9376.689 - 9427.102: 98.1819% ( 22) 00:08:48.004 9427.102 - 9477.514: 98.2807% ( 17) 00:08:48.004 9477.514 - 9527.926: 98.3794% ( 17) 00:08:48.005 9527.926 - 9578.338: 98.4607% ( 14) 00:08:48.005 9578.338 - 9628.751: 98.6524% ( 33) 00:08:48.005 9628.751 - 9679.163: 98.7686% ( 20) 00:08:48.005 9679.163 - 9729.575: 98.8209% ( 9) 00:08:48.005 9729.575 - 9779.988: 98.8615% ( 7) 00:08:48.005 9779.988 - 9830.400: 98.9022% ( 7) 00:08:48.005 9830.400 - 9880.812: 98.9603% ( 10) 00:08:48.005 9880.812 - 9931.225: 99.0009% ( 7) 00:08:48.005 9931.225 - 9981.637: 99.0822% ( 14) 00:08:48.005 9981.637 - 10032.049: 99.1520% ( 12) 00:08:48.005 10032.049 - 10082.462: 99.2100% ( 10) 00:08:48.005 10082.462 - 10132.874: 99.2449% ( 6) 00:08:48.005 10132.874 - 10183.286: 
99.2565% ( 2) 00:08:48.005 23794.609 - 23895.434: 99.2681% ( 2) 00:08:48.005 23895.434 - 23996.258: 99.2972% ( 5) 00:08:48.005 23996.258 - 24097.083: 99.3204% ( 4) 00:08:48.005 24097.083 - 24197.908: 99.3436% ( 4) 00:08:48.005 24197.908 - 24298.732: 99.3669% ( 4) 00:08:48.005 24298.732 - 24399.557: 99.3901% ( 4) 00:08:48.005 24399.557 - 24500.382: 99.4133% ( 4) 00:08:48.005 24500.382 - 24601.206: 99.4366% ( 4) 00:08:48.005 24601.206 - 24702.031: 99.4598% ( 4) 00:08:48.005 24702.031 - 24802.855: 99.4830% ( 4) 00:08:48.005 24802.855 - 24903.680: 99.5063% ( 4) 00:08:48.005 24903.680 - 25004.505: 99.5295% ( 4) 00:08:48.005 25004.505 - 25105.329: 99.5527% ( 4) 00:08:48.005 25105.329 - 25206.154: 99.5760% ( 4) 00:08:48.005 25206.154 - 25306.978: 99.5992% ( 4) 00:08:48.005 25306.978 - 25407.803: 99.6224% ( 4) 00:08:48.005 25407.803 - 25508.628: 99.6283% ( 1) 00:08:48.005 28432.542 - 28634.191: 99.6689% ( 7) 00:08:48.005 28634.191 - 28835.840: 99.7154% ( 8) 00:08:48.005 28835.840 - 29037.489: 99.7560% ( 7) 00:08:48.005 29037.489 - 29239.138: 99.8025% ( 8) 00:08:48.005 29239.138 - 29440.788: 99.8490% ( 8) 00:08:48.005 29440.788 - 29642.437: 99.9013% ( 9) 00:08:48.005 29642.437 - 29844.086: 99.9477% ( 8) 00:08:48.005 29844.086 - 30045.735: 99.9942% ( 8) 00:08:48.005 30045.735 - 30247.385: 100.0000% ( 1) 00:08:48.005 00:08:48.005 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:48.005 ============================================================================== 00:08:48.005 Range in us Cumulative IO count 00:08:48.005 5898.240 - 5923.446: 0.0058% ( 1) 00:08:48.005 5948.652 - 5973.858: 0.0116% ( 1) 00:08:48.005 5999.065 - 6024.271: 0.0232% ( 2) 00:08:48.005 6024.271 - 6049.477: 0.0290% ( 1) 00:08:48.005 6049.477 - 6074.683: 0.0407% ( 2) 00:08:48.005 6074.683 - 6099.889: 0.0697% ( 5) 00:08:48.005 6099.889 - 6125.095: 0.1046% ( 6) 00:08:48.005 6125.095 - 6150.302: 0.1743% ( 12) 00:08:48.005 6150.302 - 6175.508: 0.2730% ( 17) 00:08:48.005 6175.508 - 6200.714: 0.3543% ( 14) 00:08:48.005 6200.714 - 6225.920: 0.5925% ( 41) 00:08:48.005 6225.920 - 6251.126: 0.8364% ( 42) 00:08:48.005 6251.126 - 6276.332: 1.2082% ( 64) 00:08:48.005 6276.332 - 6301.538: 1.4579% ( 43) 00:08:48.005 6301.538 - 6326.745: 1.7600% ( 52) 00:08:48.005 6326.745 - 6351.951: 2.0388% ( 48) 00:08:48.005 6351.951 - 6377.157: 2.2886% ( 43) 00:08:48.005 6377.157 - 6402.363: 2.6661% ( 65) 00:08:48.005 6402.363 - 6427.569: 3.0263% ( 62) 00:08:48.005 6427.569 - 6452.775: 3.4329% ( 70) 00:08:48.005 6452.775 - 6503.188: 4.8153% ( 238) 00:08:48.005 6503.188 - 6553.600: 6.3604% ( 266) 00:08:48.005 6553.600 - 6604.012: 8.9277% ( 442) 00:08:48.005 6604.012 - 6654.425: 12.0179% ( 532) 00:08:48.005 6654.425 - 6704.837: 15.1371% ( 537) 00:08:48.005 6704.837 - 6755.249: 18.4712% ( 574) 00:08:48.005 6755.249 - 6805.662: 21.9331% ( 596) 00:08:48.005 6805.662 - 6856.074: 26.9981% ( 872) 00:08:48.005 6856.074 - 6906.486: 31.5404% ( 782) 00:08:48.005 6906.486 - 6956.898: 36.1640% ( 796) 00:08:48.005 6956.898 - 7007.311: 41.4382% ( 908) 00:08:48.005 7007.311 - 7057.723: 45.2370% ( 654) 00:08:48.005 7057.723 - 7108.135: 49.7328% ( 774) 00:08:48.005 7108.135 - 7158.548: 53.1482% ( 588) 00:08:48.005 7158.548 - 7208.960: 55.8666% ( 468) 00:08:48.005 7208.960 - 7259.372: 58.3120% ( 421) 00:08:48.005 7259.372 - 7309.785: 61.2976% ( 514) 00:08:48.005 7309.785 - 7360.197: 63.6617% ( 407) 00:08:48.005 7360.197 - 7410.609: 66.0606% ( 413) 00:08:48.005 7410.609 - 7461.022: 68.0530% ( 343) 00:08:48.005 7461.022 - 7511.434: 69.8362% ( 307) 00:08:48.005 
7511.434 - 7561.846: 71.7007% ( 321) 00:08:48.005 7561.846 - 7612.258: 73.1703% ( 253) 00:08:48.005 7612.258 - 7662.671: 74.4308% ( 217) 00:08:48.005 7662.671 - 7713.083: 75.7203% ( 222) 00:08:48.005 7713.083 - 7763.495: 76.8529% ( 195) 00:08:48.005 7763.495 - 7813.908: 77.9391% ( 187) 00:08:48.005 7813.908 - 7864.320: 79.1473% ( 208) 00:08:48.005 7864.320 - 7914.732: 80.3671% ( 210) 00:08:48.005 7914.732 - 7965.145: 81.8657% ( 258) 00:08:48.005 7965.145 - 8015.557: 83.6838% ( 313) 00:08:48.005 8015.557 - 8065.969: 85.1069% ( 245) 00:08:48.005 8065.969 - 8116.382: 86.6403% ( 264) 00:08:48.005 8116.382 - 8166.794: 88.2435% ( 276) 00:08:48.005 8166.794 - 8217.206: 89.3355% ( 188) 00:08:48.005 8217.206 - 8267.618: 90.7876% ( 250) 00:08:48.005 8267.618 - 8318.031: 91.7286% ( 162) 00:08:48.005 8318.031 - 8368.443: 92.5186% ( 136) 00:08:48.005 8368.443 - 8418.855: 93.1633% ( 111) 00:08:48.005 8418.855 - 8469.268: 93.7558% ( 102) 00:08:48.005 8469.268 - 8519.680: 94.3309% ( 99) 00:08:48.005 8519.680 - 8570.092: 94.9117% ( 100) 00:08:48.005 8570.092 - 8620.505: 95.3648% ( 78) 00:08:48.005 8620.505 - 8670.917: 95.7423% ( 65) 00:08:48.005 8670.917 - 8721.329: 96.0270% ( 49) 00:08:48.005 8721.329 - 8771.742: 96.3755% ( 60) 00:08:48.005 8771.742 - 8822.154: 96.6485% ( 47) 00:08:48.005 8822.154 - 8872.566: 96.8343% ( 32) 00:08:48.005 8872.566 - 8922.978: 97.0841% ( 43) 00:08:48.005 8922.978 - 8973.391: 97.3223% ( 41) 00:08:48.005 8973.391 - 9023.803: 97.4326% ( 19) 00:08:48.005 9023.803 - 9074.215: 97.5604% ( 22) 00:08:48.005 9074.215 - 9124.628: 97.6708% ( 19) 00:08:48.005 9124.628 - 9175.040: 97.7695% ( 17) 00:08:48.005 9175.040 - 9225.452: 97.9322% ( 28) 00:08:48.005 9225.452 - 9275.865: 98.0890% ( 27) 00:08:48.005 9275.865 - 9326.277: 98.3388% ( 43) 00:08:48.005 9326.277 - 9376.689: 98.3910% ( 9) 00:08:48.005 9376.689 - 9427.102: 98.4433% ( 9) 00:08:48.005 9427.102 - 9477.514: 98.5246% ( 14) 00:08:48.005 9477.514 - 9527.926: 98.6059% ( 14) 00:08:48.005 9527.926 - 9578.338: 98.6873% ( 14) 00:08:48.005 9578.338 - 9628.751: 98.7686% ( 14) 00:08:48.005 9628.751 - 9679.163: 98.9777% ( 36) 00:08:48.005 9679.163 - 9729.575: 99.1403% ( 28) 00:08:48.005 9729.575 - 9779.988: 99.1752% ( 6) 00:08:48.005 9779.988 - 9830.400: 99.2042% ( 5) 00:08:48.005 9830.400 - 9880.812: 99.2333% ( 5) 00:08:48.005 9880.812 - 9931.225: 99.2507% ( 3) 00:08:48.005 9931.225 - 9981.637: 99.2565% ( 1) 00:08:48.005 22383.065 - 22483.889: 99.2797% ( 4) 00:08:48.005 22483.889 - 22584.714: 99.3030% ( 4) 00:08:48.005 22584.714 - 22685.538: 99.3204% ( 3) 00:08:48.005 22685.538 - 22786.363: 99.3436% ( 4) 00:08:48.005 22786.363 - 22887.188: 99.3669% ( 4) 00:08:48.005 22887.188 - 22988.012: 99.3901% ( 4) 00:08:48.005 22988.012 - 23088.837: 99.4133% ( 4) 00:08:48.005 23088.837 - 23189.662: 99.4366% ( 4) 00:08:48.005 23189.662 - 23290.486: 99.4598% ( 4) 00:08:48.005 23290.486 - 23391.311: 99.4830% ( 4) 00:08:48.005 23391.311 - 23492.135: 99.5063% ( 4) 00:08:48.005 23492.135 - 23592.960: 99.5353% ( 5) 00:08:48.005 23592.960 - 23693.785: 99.5586% ( 4) 00:08:48.005 23693.785 - 23794.609: 99.5818% ( 4) 00:08:48.005 23794.609 - 23895.434: 99.6050% ( 4) 00:08:48.005 23895.434 - 23996.258: 99.6283% ( 4) 00:08:48.005 26819.348 - 27020.997: 99.6399% ( 2) 00:08:48.005 27020.997 - 27222.646: 99.6863% ( 8) 00:08:48.005 27222.646 - 27424.295: 99.7270% ( 7) 00:08:48.005 27424.295 - 27625.945: 99.7793% ( 9) 00:08:48.005 27625.945 - 27827.594: 99.8199% ( 7) 00:08:48.005 27827.594 - 28029.243: 99.8664% ( 8) 00:08:48.005 28029.243 - 28230.892: 99.9129% ( 8) 
00:08:48.005 28230.892 - 28432.542: 99.9593% ( 8) 00:08:48.005 28432.542 - 28634.191: 100.0000% ( 7) 00:08:48.005 00:08:48.005 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:48.005 ============================================================================== 00:08:48.005 Range in us Cumulative IO count 00:08:48.005 5873.034 - 5898.240: 0.0058% ( 1) 00:08:48.005 5999.065 - 6024.271: 0.0174% ( 2) 00:08:48.005 6024.271 - 6049.477: 0.0290% ( 2) 00:08:48.005 6049.477 - 6074.683: 0.0407% ( 2) 00:08:48.005 6074.683 - 6099.889: 0.0639% ( 4) 00:08:48.005 6099.889 - 6125.095: 0.1104% ( 8) 00:08:48.005 6125.095 - 6150.302: 0.1568% ( 8) 00:08:48.005 6150.302 - 6175.508: 0.1917% ( 6) 00:08:48.005 6175.508 - 6200.714: 0.2846% ( 16) 00:08:48.005 6200.714 - 6225.920: 0.4647% ( 31) 00:08:48.005 6225.920 - 6251.126: 0.6912% ( 39) 00:08:48.005 6251.126 - 6276.332: 0.9352% ( 42) 00:08:48.005 6276.332 - 6301.538: 1.1849% ( 43) 00:08:48.005 6301.538 - 6326.745: 1.4696% ( 49) 00:08:48.005 6326.745 - 6351.951: 1.8065% ( 58) 00:08:48.005 6351.951 - 6377.157: 2.1317% ( 56) 00:08:48.005 6377.157 - 6402.363: 2.6719% ( 93) 00:08:48.005 6402.363 - 6427.569: 3.1947% ( 90) 00:08:48.005 6427.569 - 6452.775: 3.7407% ( 94) 00:08:48.005 6452.775 - 6503.188: 5.1987% ( 251) 00:08:48.005 6503.188 - 6553.600: 7.2491% ( 353) 00:08:48.005 6553.600 - 6604.012: 9.3053% ( 354) 00:08:48.005 6604.012 - 6654.425: 11.4835% ( 375) 00:08:48.005 6654.425 - 6704.837: 14.4691% ( 514) 00:08:48.005 6704.837 - 6755.249: 17.3850% ( 502) 00:08:48.005 6755.249 - 6805.662: 21.0967% ( 639) 00:08:48.005 6805.662 - 6856.074: 25.0116% ( 674) 00:08:48.006 6856.074 - 6906.486: 30.1115% ( 878) 00:08:48.006 6906.486 - 6956.898: 34.6945% ( 789) 00:08:48.006 6956.898 - 7007.311: 39.6317% ( 850) 00:08:48.006 7007.311 - 7057.723: 45.0046% ( 925) 00:08:48.006 7057.723 - 7108.135: 49.4888% ( 772) 00:08:48.006 7108.135 - 7158.548: 53.3167% ( 659) 00:08:48.006 7158.548 - 7208.960: 56.7960% ( 599) 00:08:48.006 7208.960 - 7259.372: 59.2356% ( 420) 00:08:48.006 7259.372 - 7309.785: 61.8611% ( 452) 00:08:48.006 7309.785 - 7360.197: 64.0974% ( 385) 00:08:48.006 7360.197 - 7410.609: 66.0258% ( 332) 00:08:48.006 7410.609 - 7461.022: 68.3841% ( 406) 00:08:48.006 7461.022 - 7511.434: 69.8885% ( 259) 00:08:48.006 7511.434 - 7561.846: 71.0850% ( 206) 00:08:48.006 7561.846 - 7612.258: 72.4907% ( 242) 00:08:48.006 7612.258 - 7662.671: 74.4250% ( 333) 00:08:48.006 7662.671 - 7713.083: 75.8655% ( 248) 00:08:48.006 7713.083 - 7763.495: 77.1201% ( 216) 00:08:48.006 7763.495 - 7813.908: 78.1424% ( 176) 00:08:48.006 7813.908 - 7864.320: 79.2751% ( 195) 00:08:48.006 7864.320 - 7914.732: 80.7447% ( 253) 00:08:48.006 7914.732 - 7965.145: 82.4059% ( 286) 00:08:48.006 7965.145 - 8015.557: 83.7186% ( 226) 00:08:48.006 8015.557 - 8065.969: 85.1940% ( 254) 00:08:48.006 8065.969 - 8116.382: 86.8262% ( 281) 00:08:48.006 8116.382 - 8166.794: 88.3190% ( 257) 00:08:48.006 8166.794 - 8217.206: 89.7711% ( 250) 00:08:48.006 8217.206 - 8267.618: 91.1303% ( 234) 00:08:48.006 8267.618 - 8318.031: 92.0713% ( 162) 00:08:48.006 8318.031 - 8368.443: 92.7393% ( 115) 00:08:48.006 8368.443 - 8418.855: 93.2911% ( 95) 00:08:48.006 8418.855 - 8469.268: 93.7616% ( 81) 00:08:48.006 8469.268 - 8519.680: 94.1973% ( 75) 00:08:48.006 8519.680 - 8570.092: 94.5922% ( 68) 00:08:48.006 8570.092 - 8620.505: 95.1557% ( 97) 00:08:48.006 8620.505 - 8670.917: 95.5681% ( 71) 00:08:48.006 8670.917 - 8721.329: 95.8875% ( 55) 00:08:48.006 8721.329 - 8771.742: 96.3348% ( 77) 00:08:48.006 8771.742 - 8822.154: 
96.6833% ( 60) 00:08:48.006 8822.154 - 8872.566: 97.0957% ( 71) 00:08:48.006 8872.566 - 8922.978: 97.3397% ( 42) 00:08:48.006 8922.978 - 8973.391: 97.4617% ( 21) 00:08:48.006 8973.391 - 9023.803: 97.5430% ( 14) 00:08:48.006 9023.803 - 9074.215: 97.6359% ( 16) 00:08:48.006 9074.215 - 9124.628: 97.7289% ( 16) 00:08:48.006 9124.628 - 9175.040: 97.8276% ( 17) 00:08:48.006 9175.040 - 9225.452: 97.8973% ( 12) 00:08:48.006 9225.452 - 9275.865: 98.1355% ( 41) 00:08:48.006 9275.865 - 9326.277: 98.3562% ( 38) 00:08:48.006 9326.277 - 9376.689: 98.4026% ( 8) 00:08:48.006 9376.689 - 9427.102: 98.4549% ( 9) 00:08:48.006 9427.102 - 9477.514: 98.5653% ( 19) 00:08:48.006 9477.514 - 9527.926: 98.6757% ( 19) 00:08:48.006 9527.926 - 9578.338: 98.7454% ( 12) 00:08:48.006 9578.338 - 9628.751: 99.0358% ( 50) 00:08:48.006 9628.751 - 9679.163: 99.0881% ( 9) 00:08:48.006 9679.163 - 9729.575: 99.1578% ( 12) 00:08:48.006 9729.575 - 9779.988: 99.2100% ( 9) 00:08:48.006 9779.988 - 9830.400: 99.2507% ( 7) 00:08:48.006 9830.400 - 9880.812: 99.2565% ( 1) 00:08:48.006 20568.222 - 20669.046: 99.2797% ( 4) 00:08:48.006 20669.046 - 20769.871: 99.3030% ( 4) 00:08:48.006 20769.871 - 20870.695: 99.3262% ( 4) 00:08:48.006 20870.695 - 20971.520: 99.3494% ( 4) 00:08:48.006 20971.520 - 21072.345: 99.3727% ( 4) 00:08:48.006 21072.345 - 21173.169: 99.3959% ( 4) 00:08:48.006 21173.169 - 21273.994: 99.4191% ( 4) 00:08:48.006 21273.994 - 21374.818: 99.4424% ( 4) 00:08:48.006 21374.818 - 21475.643: 99.4656% ( 4) 00:08:48.006 21475.643 - 21576.468: 99.4888% ( 4) 00:08:48.006 21576.468 - 21677.292: 99.5121% ( 4) 00:08:48.006 21677.292 - 21778.117: 99.5353% ( 4) 00:08:48.006 21778.117 - 21878.942: 99.5586% ( 4) 00:08:48.006 21878.942 - 21979.766: 99.5760% ( 3) 00:08:48.006 21979.766 - 22080.591: 99.5992% ( 4) 00:08:48.006 22080.591 - 22181.415: 99.6224% ( 4) 00:08:48.006 22181.415 - 22282.240: 99.6283% ( 1) 00:08:48.006 25206.154 - 25306.978: 99.6399% ( 2) 00:08:48.006 25306.978 - 25407.803: 99.6631% ( 4) 00:08:48.006 25407.803 - 25508.628: 99.6863% ( 4) 00:08:48.006 25508.628 - 25609.452: 99.7096% ( 4) 00:08:48.006 25609.452 - 25710.277: 99.7328% ( 4) 00:08:48.006 25710.277 - 25811.102: 99.7560% ( 4) 00:08:48.006 25811.102 - 26012.751: 99.7967% ( 7) 00:08:48.006 26012.751 - 26214.400: 99.8432% ( 8) 00:08:48.006 26214.400 - 26416.049: 99.8896% ( 8) 00:08:48.006 26416.049 - 26617.698: 99.9303% ( 7) 00:08:48.006 26617.698 - 26819.348: 99.9768% ( 8) 00:08:48.006 26819.348 - 27020.997: 100.0000% ( 4) 00:08:48.006 00:08:48.006 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:48.006 ============================================================================== 00:08:48.006 Range in us Cumulative IO count 00:08:48.006 6049.477 - 6074.683: 0.0058% ( 1) 00:08:48.006 6074.683 - 6099.889: 0.0349% ( 5) 00:08:48.006 6099.889 - 6125.095: 0.1046% ( 12) 00:08:48.006 6125.095 - 6150.302: 0.1743% ( 12) 00:08:48.006 6150.302 - 6175.508: 0.2382% ( 11) 00:08:48.006 6175.508 - 6200.714: 0.3369% ( 17) 00:08:48.006 6200.714 - 6225.920: 0.5228% ( 32) 00:08:48.006 6225.920 - 6251.126: 0.8074% ( 49) 00:08:48.006 6251.126 - 6276.332: 0.9061% ( 17) 00:08:48.006 6276.332 - 6301.538: 1.0746% ( 29) 00:08:48.006 6301.538 - 6326.745: 1.4405% ( 63) 00:08:48.006 6326.745 - 6351.951: 1.6787% ( 41) 00:08:48.006 6351.951 - 6377.157: 1.9575% ( 48) 00:08:48.006 6377.157 - 6402.363: 2.4280% ( 81) 00:08:48.006 6402.363 - 6427.569: 2.8927% ( 80) 00:08:48.006 6427.569 - 6452.775: 3.3457% ( 78) 00:08:48.006 6452.775 - 6503.188: 4.9024% ( 268) 00:08:48.006 6503.188 - 
6553.600: 6.6276% ( 297) 00:08:48.006 6553.600 - 6604.012: 8.7651% ( 368) 00:08:48.006 6604.012 - 6654.425: 11.0711% ( 397) 00:08:48.006 6654.425 - 6704.837: 14.5969% ( 607) 00:08:48.006 6704.837 - 6755.249: 17.6115% ( 519) 00:08:48.006 6755.249 - 6805.662: 20.9340% ( 572) 00:08:48.006 6805.662 - 6856.074: 25.5692% ( 798) 00:08:48.006 6856.074 - 6906.486: 30.5355% ( 855) 00:08:48.006 6906.486 - 6956.898: 34.6712% ( 712) 00:08:48.006 6956.898 - 7007.311: 39.9686% ( 912) 00:08:48.006 7007.311 - 7057.723: 45.2428% ( 908) 00:08:48.006 7057.723 - 7108.135: 49.0300% ( 652) 00:08:48.006 7108.135 - 7158.548: 53.4619% ( 763) 00:08:48.006 7158.548 - 7208.960: 57.1736% ( 639) 00:08:48.006 7208.960 - 7259.372: 59.8629% ( 463) 00:08:48.006 7259.372 - 7309.785: 62.0702% ( 380) 00:08:48.006 7309.785 - 7360.197: 63.9057% ( 316) 00:08:48.006 7360.197 - 7410.609: 65.8225% ( 330) 00:08:48.006 7410.609 - 7461.022: 67.5244% ( 293) 00:08:48.006 7461.022 - 7511.434: 69.4238% ( 327) 00:08:48.006 7511.434 - 7561.846: 71.4394% ( 347) 00:08:48.006 7561.846 - 7612.258: 72.8799% ( 248) 00:08:48.006 7612.258 - 7662.671: 74.2914% ( 243) 00:08:48.006 7662.671 - 7713.083: 75.5518% ( 217) 00:08:48.006 7713.083 - 7763.495: 76.8239% ( 219) 00:08:48.006 7763.495 - 7813.908: 78.0553% ( 212) 00:08:48.006 7813.908 - 7864.320: 79.5888% ( 264) 00:08:48.006 7864.320 - 7914.732: 81.0816% ( 257) 00:08:48.006 7914.732 - 7965.145: 82.6499% ( 270) 00:08:48.006 7965.145 - 8015.557: 84.1368% ( 256) 00:08:48.006 8015.557 - 8065.969: 85.4031% ( 218) 00:08:48.006 8065.969 - 8116.382: 87.0121% ( 277) 00:08:48.006 8116.382 - 8166.794: 88.7953% ( 307) 00:08:48.006 8166.794 - 8217.206: 89.9512% ( 199) 00:08:48.006 8217.206 - 8267.618: 91.4092% ( 251) 00:08:48.006 8267.618 - 8318.031: 92.5651% ( 199) 00:08:48.006 8318.031 - 8368.443: 93.1053% ( 93) 00:08:48.006 8368.443 - 8418.855: 93.5641% ( 79) 00:08:48.006 8418.855 - 8469.268: 93.9301% ( 63) 00:08:48.006 8469.268 - 8519.680: 94.3192% ( 67) 00:08:48.006 8519.680 - 8570.092: 94.6329% ( 54) 00:08:48.006 8570.092 - 8620.505: 94.9814% ( 60) 00:08:48.006 8620.505 - 8670.917: 95.3880% ( 70) 00:08:48.006 8670.917 - 8721.329: 95.8004% ( 71) 00:08:48.006 8721.329 - 8771.742: 95.9863% ( 32) 00:08:48.006 8771.742 - 8822.154: 96.1664% ( 31) 00:08:48.006 8822.154 - 8872.566: 96.4103% ( 42) 00:08:48.006 8872.566 - 8922.978: 96.6252% ( 37) 00:08:48.006 8922.978 - 8973.391: 96.8169% ( 33) 00:08:48.006 8973.391 - 9023.803: 97.1248% ( 53) 00:08:48.006 9023.803 - 9074.215: 97.4268% ( 52) 00:08:48.006 9074.215 - 9124.628: 97.6011% ( 30) 00:08:48.006 9124.628 - 9175.040: 97.7811% ( 31) 00:08:48.006 9175.040 - 9225.452: 97.8973% ( 20) 00:08:48.006 9225.452 - 9275.865: 98.1296% ( 40) 00:08:48.006 9275.865 - 9326.277: 98.4375% ( 53) 00:08:48.006 9326.277 - 9376.689: 98.6350% ( 34) 00:08:48.006 9376.689 - 9427.102: 98.7279% ( 16) 00:08:48.006 9427.102 - 9477.514: 98.7744% ( 8) 00:08:48.006 9477.514 - 9527.926: 98.8441% ( 12) 00:08:48.006 9527.926 - 9578.338: 98.8964% ( 9) 00:08:48.006 9578.338 - 9628.751: 98.9428% ( 8) 00:08:48.006 9628.751 - 9679.163: 99.1578% ( 37) 00:08:48.006 9679.163 - 9729.575: 99.1926% ( 6) 00:08:48.006 9729.575 - 9779.988: 99.2100% ( 3) 00:08:48.006 9779.988 - 9830.400: 99.2333% ( 4) 00:08:48.006 9830.400 - 9880.812: 99.2565% ( 4) 00:08:48.006 18854.203 - 18955.028: 99.2681% ( 2) 00:08:48.006 18955.028 - 19055.852: 99.2914% ( 4) 00:08:48.006 19055.852 - 19156.677: 99.3146% ( 4) 00:08:48.006 19156.677 - 19257.502: 99.3378% ( 4) 00:08:48.006 19257.502 - 19358.326: 99.3553% ( 3) 00:08:48.006 
19358.326 - 19459.151: 99.3785% ( 4) 00:08:48.006 19459.151 - 19559.975: 99.4017% ( 4) 00:08:48.006 19559.975 - 19660.800: 99.4250% ( 4) 00:08:48.006 19660.800 - 19761.625: 99.4482% ( 4) 00:08:48.006 19761.625 - 19862.449: 99.4714% ( 4) 00:08:48.006 19862.449 - 19963.274: 99.4947% ( 4) 00:08:48.006 19963.274 - 20064.098: 99.5179% ( 4) 00:08:48.007 20064.098 - 20164.923: 99.5411% ( 4) 00:08:48.007 20164.923 - 20265.748: 99.5644% ( 4) 00:08:48.007 20265.748 - 20366.572: 99.5876% ( 4) 00:08:48.007 20366.572 - 20467.397: 99.6108% ( 4) 00:08:48.007 20467.397 - 20568.222: 99.6283% ( 3) 00:08:48.007 23492.135 - 23592.960: 99.6457% ( 3) 00:08:48.007 23592.960 - 23693.785: 99.6689% ( 4) 00:08:48.007 23693.785 - 23794.609: 99.6921% ( 4) 00:08:48.007 23794.609 - 23895.434: 99.7154% ( 4) 00:08:48.007 23895.434 - 23996.258: 99.7328% ( 3) 00:08:48.007 23996.258 - 24097.083: 99.7618% ( 5) 00:08:48.007 24097.083 - 24197.908: 99.7851% ( 4) 00:08:48.007 24197.908 - 24298.732: 99.8083% ( 4) 00:08:48.007 24298.732 - 24399.557: 99.8316% ( 4) 00:08:48.007 24399.557 - 24500.382: 99.8548% ( 4) 00:08:48.007 24500.382 - 24601.206: 99.8838% ( 5) 00:08:48.007 24601.206 - 24702.031: 99.9013% ( 3) 00:08:48.007 24702.031 - 24802.855: 99.9187% ( 3) 00:08:48.007 24802.855 - 24903.680: 99.9361% ( 3) 00:08:48.007 24903.680 - 25004.505: 99.9535% ( 3) 00:08:48.007 25004.505 - 25105.329: 99.9710% ( 3) 00:08:48.007 25105.329 - 25206.154: 99.9942% ( 4) 00:08:48.007 25206.154 - 25306.978: 100.0000% ( 1) 00:08:48.007 00:08:48.007 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:48.007 ============================================================================== 00:08:48.007 Range in us Cumulative IO count 00:08:48.007 5948.652 - 5973.858: 0.0116% ( 2) 00:08:48.007 5973.858 - 5999.065: 0.0174% ( 1) 00:08:48.007 5999.065 - 6024.271: 0.0405% ( 4) 00:08:48.007 6024.271 - 6049.477: 0.0521% ( 2) 00:08:48.007 6049.477 - 6074.683: 0.0637% ( 2) 00:08:48.007 6074.683 - 6099.889: 0.0810% ( 3) 00:08:48.007 6099.889 - 6125.095: 0.1100% ( 5) 00:08:48.007 6125.095 - 6150.302: 0.1678% ( 10) 00:08:48.007 6150.302 - 6175.508: 0.2488% ( 14) 00:08:48.007 6175.508 - 6200.714: 0.3762% ( 22) 00:08:48.007 6200.714 - 6225.920: 0.4398% ( 11) 00:08:48.007 6225.920 - 6251.126: 0.5324% ( 16) 00:08:48.007 6251.126 - 6276.332: 0.7523% ( 38) 00:08:48.007 6276.332 - 6301.538: 1.0648% ( 54) 00:08:48.007 6301.538 - 6326.745: 1.4699% ( 70) 00:08:48.007 6326.745 - 6351.951: 1.7072% ( 41) 00:08:48.007 6351.951 - 6377.157: 1.9965% ( 50) 00:08:48.007 6377.157 - 6402.363: 2.3090% ( 54) 00:08:48.007 6402.363 - 6427.569: 2.7778% ( 81) 00:08:48.007 6427.569 - 6452.775: 3.2986% ( 90) 00:08:48.007 6452.775 - 6503.188: 4.9826% ( 291) 00:08:48.007 6503.188 - 6553.600: 6.6088% ( 281) 00:08:48.007 6553.600 - 6604.012: 8.9120% ( 398) 00:08:48.007 6604.012 - 6654.425: 11.5278% ( 452) 00:08:48.007 6654.425 - 6704.837: 14.3113% ( 481) 00:08:48.007 6704.837 - 6755.249: 17.5637% ( 562) 00:08:48.007 6755.249 - 6805.662: 21.0822% ( 608) 00:08:48.007 6805.662 - 6856.074: 25.0579% ( 687) 00:08:48.007 6856.074 - 6906.486: 30.1678% ( 883) 00:08:48.007 6906.486 - 6956.898: 33.9352% ( 651) 00:08:48.007 6956.898 - 7007.311: 38.8426% ( 848) 00:08:48.007 7007.311 - 7057.723: 43.9236% ( 878) 00:08:48.007 7057.723 - 7108.135: 48.8889% ( 858) 00:08:48.007 7108.135 - 7158.548: 52.7894% ( 674) 00:08:48.007 7158.548 - 7208.960: 55.8507% ( 529) 00:08:48.007 7208.960 - 7259.372: 59.1898% ( 577) 00:08:48.007 7259.372 - 7309.785: 61.7824% ( 448) 00:08:48.007 7309.785 - 7360.197: 
64.2650% ( 429) 00:08:48.007 7360.197 - 7410.609: 66.4757% ( 382) 00:08:48.007 7410.609 - 7461.022: 67.8414% ( 236) 00:08:48.007 7461.022 - 7511.434: 69.8553% ( 348) 00:08:48.007 7511.434 - 7561.846: 71.2674% ( 244) 00:08:48.007 7561.846 - 7612.258: 72.6042% ( 231) 00:08:48.007 7612.258 - 7662.671: 74.0046% ( 242) 00:08:48.007 7662.671 - 7713.083: 75.6771% ( 289) 00:08:48.007 7713.083 - 7763.495: 77.1123% ( 248) 00:08:48.007 7763.495 - 7813.908: 78.5590% ( 250) 00:08:48.007 7813.908 - 7864.320: 80.0174% ( 252) 00:08:48.007 7864.320 - 7914.732: 81.6088% ( 275) 00:08:48.007 7914.732 - 7965.145: 83.2002% ( 275) 00:08:48.007 7965.145 - 8015.557: 84.5602% ( 235) 00:08:48.007 8015.557 - 8065.969: 85.9606% ( 242) 00:08:48.007 8065.969 - 8116.382: 87.5752% ( 279) 00:08:48.007 8116.382 - 8166.794: 88.8773% ( 225) 00:08:48.007 8166.794 - 8217.206: 89.8322% ( 165) 00:08:48.007 8217.206 - 8267.618: 90.9375% ( 191) 00:08:48.007 8267.618 - 8318.031: 92.0891% ( 199) 00:08:48.007 8318.031 - 8368.443: 92.9398% ( 147) 00:08:48.007 8368.443 - 8418.855: 93.4491% ( 88) 00:08:48.007 8418.855 - 8469.268: 93.9062% ( 79) 00:08:48.007 8469.268 - 8519.680: 94.2998% ( 68) 00:08:48.007 8519.680 - 8570.092: 94.7222% ( 73) 00:08:48.007 8570.092 - 8620.505: 95.1042% ( 66) 00:08:48.007 8620.505 - 8670.917: 95.3125% ( 36) 00:08:48.007 8670.917 - 8721.329: 95.5093% ( 34) 00:08:48.007 8721.329 - 8771.742: 95.6481% ( 24) 00:08:48.007 8771.742 - 8822.154: 96.0417% ( 68) 00:08:48.007 8822.154 - 8872.566: 96.2674% ( 39) 00:08:48.007 8872.566 - 8922.978: 96.4120% ( 25) 00:08:48.007 8922.978 - 8973.391: 96.6030% ( 33) 00:08:48.007 8973.391 - 9023.803: 96.7998% ( 34) 00:08:48.007 9023.803 - 9074.215: 97.0081% ( 36) 00:08:48.007 9074.215 - 9124.628: 97.2569% ( 43) 00:08:48.007 9124.628 - 9175.040: 97.5694% ( 54) 00:08:48.007 9175.040 - 9225.452: 97.8356% ( 46) 00:08:48.007 9225.452 - 9275.865: 98.4201% ( 101) 00:08:48.007 9275.865 - 9326.277: 98.6458% ( 39) 00:08:48.007 9326.277 - 9376.689: 98.8773% ( 40) 00:08:48.007 9376.689 - 9427.102: 99.0625% ( 32) 00:08:48.007 9427.102 - 9477.514: 99.1319% ( 12) 00:08:48.007 9477.514 - 9527.926: 99.1725% ( 7) 00:08:48.007 9527.926 - 9578.338: 99.1956% ( 4) 00:08:48.007 9578.338 - 9628.751: 99.2130% ( 3) 00:08:48.007 9628.751 - 9679.163: 99.2245% ( 2) 00:08:48.007 9679.163 - 9729.575: 99.2361% ( 2) 00:08:48.007 9729.575 - 9779.988: 99.2535% ( 3) 00:08:48.007 9779.988 - 9830.400: 99.2593% ( 1) 00:08:48.007 13308.849 - 13409.674: 99.2766% ( 3) 00:08:48.007 13409.674 - 13510.498: 99.2998% ( 4) 00:08:48.007 13510.498 - 13611.323: 99.3229% ( 4) 00:08:48.007 13611.323 - 13712.148: 99.3461% ( 4) 00:08:48.007 13712.148 - 13812.972: 99.3692% ( 4) 00:08:48.007 13812.972 - 13913.797: 99.3924% ( 4) 00:08:48.007 13913.797 - 14014.622: 99.4155% ( 4) 00:08:48.007 14014.622 - 14115.446: 99.4387% ( 4) 00:08:48.007 14115.446 - 14216.271: 99.4618% ( 4) 00:08:48.007 14216.271 - 14317.095: 99.4850% ( 4) 00:08:48.007 14317.095 - 14417.920: 99.5139% ( 5) 00:08:48.007 14417.920 - 14518.745: 99.5370% ( 4) 00:08:48.007 14518.745 - 14619.569: 99.5602% ( 4) 00:08:48.007 14619.569 - 14720.394: 99.5833% ( 4) 00:08:48.007 14720.394 - 14821.218: 99.6065% ( 4) 00:08:48.007 14821.218 - 14922.043: 99.6296% ( 4) 00:08:48.007 18249.255 - 18350.080: 99.6470% ( 3) 00:08:48.007 18350.080 - 18450.905: 99.6701% ( 4) 00:08:48.007 18450.905 - 18551.729: 99.6933% ( 4) 00:08:48.007 18551.729 - 18652.554: 99.7164% ( 4) 00:08:48.007 18652.554 - 18753.378: 99.7396% ( 4) 00:08:48.007 18753.378 - 18854.203: 99.7627% ( 4) 00:08:48.007 18854.203 
- 18955.028: 99.7859% ( 4) 00:08:48.007 18955.028 - 19055.852: 99.8090% ( 4) 00:08:48.007 19055.852 - 19156.677: 99.8322% ( 4) 00:08:48.007 19156.677 - 19257.502: 99.8495% ( 3) 00:08:48.007 19257.502 - 19358.326: 99.8727% ( 4) 00:08:48.007 19358.326 - 19459.151: 99.8958% ( 4) 00:08:48.007 19459.151 - 19559.975: 99.9190% ( 4) 00:08:48.007 19559.975 - 19660.800: 99.9421% ( 4) 00:08:48.007 19660.800 - 19761.625: 99.9653% ( 4) 00:08:48.007 19761.625 - 19862.449: 99.9942% ( 5) 00:08:48.007 19862.449 - 19963.274: 100.0000% ( 1) 00:08:48.007 00:08:48.007 03:59:35 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:48.007 00:08:48.007 real 0m2.497s 00:08:48.007 user 0m2.206s 00:08:48.007 sys 0m0.192s 00:08:48.007 03:59:35 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.007 03:59:35 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:48.007 ************************************ 00:08:48.007 END TEST nvme_perf 00:08:48.007 ************************************ 00:08:48.007 03:59:35 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:48.007 03:59:35 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:48.007 03:59:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.007 03:59:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:48.007 ************************************ 00:08:48.007 START TEST nvme_hello_world 00:08:48.007 ************************************ 00:08:48.007 03:59:35 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:48.263 Initializing NVMe Controllers 00:08:48.263 Attached to 0000:00:10.0 00:08:48.263 Namespace ID: 1 size: 6GB 00:08:48.263 Attached to 0000:00:11.0 00:08:48.263 Namespace ID: 1 size: 5GB 00:08:48.263 Attached to 0000:00:13.0 00:08:48.263 Namespace ID: 1 size: 1GB 00:08:48.263 Attached to 0000:00:12.0 00:08:48.263 Namespace ID: 1 size: 4GB 00:08:48.263 Namespace ID: 2 size: 4GB 00:08:48.263 Namespace ID: 3 size: 4GB 00:08:48.263 Initialization complete. 00:08:48.263 INFO: using host memory buffer for IO 00:08:48.263 Hello world! 00:08:48.263 INFO: using host memory buffer for IO 00:08:48.263 Hello world! 00:08:48.263 INFO: using host memory buffer for IO 00:08:48.263 Hello world! 00:08:48.263 INFO: using host memory buffer for IO 00:08:48.263 Hello world! 00:08:48.263 INFO: using host memory buffer for IO 00:08:48.263 Hello world! 00:08:48.263 INFO: using host memory buffer for IO 00:08:48.263 Hello world! 
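A note on the nvme_perf run that finished above: the workload was spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0, i.e. queue depth 128, a 100% write workload, 12288-byte (12 KiB) IOs, a 1-second run, joining shared-memory group 0. Passing -L once enables software latency tracking and yields the "Summary latency data" percentile blocks; passing it twice, as here, additionally prints the per-bucket "Latency histogram" tables, where each row is a latency range in microseconds followed by the cumulative percentage and the IO count in that bucket. The summary blocks read the same way: "50.00000% : 7158.548us" means half of these 12 KiB writes completed within roughly 7.16 ms. An annotated re-run sketch follows (the flag meanings are my reading of the perf tool's help output, so treat them as assumptions rather than documentation):

    # Sketch: reproduce the run above from the job's SPDK checkout; assumes the
    # NVMe devices are already bound to a userspace driver (e.g. via scripts/setup.sh).
    #   -q 128    queue depth per namespace
    #   -w write  100% write workload
    #   -o 12288  IO size in bytes (12 KiB)
    #   -t 1      run time in seconds
    #   -LL       latency tracking; given twice, also print the bucket histograms
    #   -i 0      shared-memory group ID, matching the other SPDK apps in this job
    cd /home/vagrant/spdk_repo/spdk
    sudo build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0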
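The hello_world output above has the expected shape for this example: it attaches to each of the four emulated controllers, and for every active namespace it writes the "Hello world!" string to the start of the namespace, reads it back, and prints it — one INFO/"Hello world!" pair per namespace, six in total here. The "using host memory buffer" INFO line indicates the IO buffer came from ordinary host memory, presumably because these emulated controllers expose no controller memory buffer. A standalone invocation sketch (repo path taken from the log; root privileges and a completed device bind are assumptions, and the write/read-back behavior described is the example's intent rather than something this log prints):

    # Sketch: run the example by itself against the same devices.
    cd /home/vagrant/spdk_repo/spdk
    sudo build/examples/hello_world -i 0   # -i 0: same shared-memory group as the harness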
00:08:48.264 00:08:48.264 real 0m0.217s 00:08:48.264 user 0m0.087s 00:08:48.264 sys 0m0.086s 00:08:48.264 03:59:35 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.264 03:59:35 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:48.264 ************************************ 00:08:48.264 END TEST nvme_hello_world 00:08:48.264 ************************************ 00:08:48.264 03:59:35 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:48.264 03:59:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.264 03:59:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.264 03:59:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:48.264 ************************************ 00:08:48.264 START TEST nvme_sgl 00:08:48.264 ************************************ 00:08:48.264 03:59:35 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:48.520 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:48.520 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:48.520 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:48.520 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:48.520 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:48.520 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:48.520 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:48.520 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:48.520 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:48.520 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:48.520 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:48.520 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:48.520 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:48.520 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:48.520 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:48.520 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:48.520 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:48.520 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:48.520 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:48.520 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:48.520 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:48.520 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:48.520 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:48.520 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:48.520 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:48.520 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:48.520 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:48.520 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:48.520 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:48.520 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:48.520 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:48.520 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:48.520 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:48.520 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:08:48.520 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:08:48.520 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:08:48.520 NVMe Readv/Writev Request test
00:08:48.520 Attached to 0000:00:10.0
00:08:48.520 Attached to 0000:00:11.0
00:08:48.520 Attached to 0000:00:13.0
00:08:48.520 Attached to 0000:00:12.0
00:08:48.520 0000:00:10.0: build_io_request_2 test passed
00:08:48.520 0000:00:10.0: build_io_request_4 test passed
00:08:48.520 0000:00:10.0: build_io_request_5 test passed
00:08:48.520 0000:00:10.0: build_io_request_6 test passed
00:08:48.520 0000:00:10.0: build_io_request_7 test passed
00:08:48.520 0000:00:10.0: build_io_request_10 test passed
00:08:48.520 0000:00:11.0: build_io_request_2 test passed
00:08:48.520 0000:00:11.0: build_io_request_4 test passed
00:08:48.520 0000:00:11.0: build_io_request_5 test passed
00:08:48.520 0000:00:11.0: build_io_request_6 test passed
00:08:48.521 0000:00:11.0: build_io_request_7 test passed
00:08:48.521 0000:00:11.0: build_io_request_10 test passed
00:08:48.521 Cleaning up...
00:08:48.521
00:08:48.521 real 0m0.283s
00:08:48.521 user 0m0.150s
00:08:48.521 sys 0m0.090s
00:08:48.521 03:59:36 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:48.521 03:59:36 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:08:48.521 ************************************
00:08:48.521 END TEST nvme_sgl
00:08:48.521 ************************************
00:08:48.521 03:59:36 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:48.521 03:59:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:48.521 03:59:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:48.521 03:59:36 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:48.777 ************************************
00:08:48.777 START TEST nvme_e2edp
00:08:48.777 ************************************
00:08:48.777 03:59:36 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:48.777 NVMe Write/Read with End-to-End data protection test
00:08:48.777 Attached to 0000:00:10.0
00:08:48.777 Attached to 0000:00:11.0
00:08:48.777 Attached to 0000:00:13.0
00:08:48.777 Attached to 0000:00:12.0
00:08:48.777 Cleaning up...
00:08:48.777
00:08:48.777 real 0m0.201s
00:08:48.777 user 0m0.065s
00:08:48.777 sys 0m0.095s
00:08:48.777 03:59:36 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:48.777 ************************************
00:08:48.777 END TEST nvme_e2edp
00:08:48.777 03:59:36 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:08:48.777 ************************************
00:08:48.777 03:59:36 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:48.777 03:59:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:48.777 03:59:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:48.777 03:59:36 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:48.777 ************************************
00:08:48.777 START TEST nvme_reserve
00:08:48.777 ************************************
00:08:48.777 03:59:36 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:49.035 =====================================================
00:08:49.035 NVMe Controller at PCI bus 0, device 16, function 0
00:08:49.035 =====================================================
00:08:49.035 Reservations: Not Supported
00:08:49.035 =====================================================
00:08:49.035 NVMe Controller at PCI bus 0, device 17, function 0
00:08:49.035 =====================================================
00:08:49.035 Reservations: Not Supported
00:08:49.035 =====================================================
00:08:49.035 NVMe Controller at PCI bus 0, device 19, function 0
00:08:49.035 =====================================================
00:08:49.035 Reservations: Not Supported
00:08:49.035 =====================================================
00:08:49.035 NVMe Controller at PCI bus 0, device 18, function 0
00:08:49.035 =====================================================
00:08:49.035 Reservations: Not Supported
00:08:49.035 Reservation test passed
00:08:49.035
00:08:49.035 real 0m0.212s
00:08:49.035 user 0m0.075s
00:08:49.035 sys 0m0.093s
00:08:49.035 03:59:36 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:49.035 03:59:36 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:08:49.035 ************************************
00:08:49.035 END TEST nvme_reserve
00:08:49.035 ************************************
00:08:49.035 03:59:36 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:49.035 03:59:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:49.035 03:59:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:49.035 03:59:36 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:49.035 ************************************
00:08:49.035 START TEST nvme_err_injection
00:08:49.035 ************************************
00:08:49.035 03:59:36 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:49.293 NVMe Error Injection test
00:08:49.293 Attached to 0000:00:10.0
00:08:49.293 Attached to 0000:00:11.0
00:08:49.293 Attached to 0000:00:13.0
00:08:49.293 Attached to 0000:00:12.0
00:08:49.293 0000:00:11.0: get features failed as expected
00:08:49.293 0000:00:13.0: get features failed as expected
00:08:49.293 0000:00:12.0: get features failed as expected
00:08:49.293 0000:00:10.0: get features failed as expected
00:08:49.293 0000:00:10.0: get features successfully as expected
00:08:49.293 0000:00:11.0: get features successfully as expected
00:08:49.293 0000:00:13.0: get features successfully as expected
00:08:49.293 0000:00:12.0: get features successfully as expected
00:08:49.293 0000:00:10.0: read failed as expected
00:08:49.293 0000:00:11.0: read failed as expected
00:08:49.293 0000:00:13.0: read failed as expected
00:08:49.293 0000:00:12.0: read failed as expected
00:08:49.293 0000:00:10.0: read successfully as expected
00:08:49.293 0000:00:11.0: read successfully as expected
00:08:49.293 0000:00:13.0: read successfully as expected
00:08:49.293 0000:00:12.0: read successfully as expected
00:08:49.293 Cleaning up...
00:08:49.293
00:08:49.293 real 0m0.225s
00:08:49.293 user 0m0.086s
00:08:49.293 sys 0m0.094s
00:08:49.294 03:59:36 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:49.294 03:59:36 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:08:49.294 ************************************
00:08:49.294 END TEST nvme_err_injection
00:08:49.294 ************************************
00:08:49.294 03:59:36 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:49.294 03:59:36 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:08:49.294 03:59:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:49.294 03:59:36 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:49.294 ************************************
00:08:49.294 START TEST nvme_overhead
00:08:49.294 ************************************
00:08:49.294 03:59:36 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:50.667 Initializing NVMe Controllers
00:08:50.667 Attached to 0000:00:10.0
00:08:50.667 Attached to 0000:00:11.0
00:08:50.667 Attached to 0000:00:13.0
00:08:50.667 Attached to 0000:00:12.0
00:08:50.667 Initialization complete. Launching workers.
00:08:50.667 submit (in ns) avg, min, max = 11361.2, 10696.9, 74388.5
00:08:50.667 complete (in ns) avg, min, max = 7706.5, 7181.5, 285364.6
00:08:50.667
00:08:50.667 Submit histogram
00:08:50.667 ================
00:08:50.667 Range in us Cumulative Count
00:08:50.667 10.683 - 10.732: 0.0177% ( 3)
00:08:50.667 10.732 - 10.782: 0.1178% ( 17)
00:08:50.667 10.782 - 10.831: 1.5425% ( 242)
00:08:50.667 10.831 - 10.880: 7.6067% ( 1030)
00:08:50.667 10.880 - 10.929: 23.1086% ( 2633)
00:08:50.667 10.929 - 10.978: 44.3156% ( 3602)
00:08:50.667 10.978 - 11.028: 62.9261% ( 3161)
00:08:50.667 11.028 - 11.077: 75.1369% ( 2074)
00:08:50.667 11.077 - 11.126: 81.2835% ( 1044)
00:08:50.667 11.126 - 11.175: 84.3273% ( 517)
00:08:50.667 11.175 - 11.225: 85.7050% ( 234)
00:08:50.667 11.225 - 11.274: 86.4822% ( 132)
00:08:50.667 11.274 - 11.323: 87.0121% ( 90)
00:08:50.667 11.323 - 11.372: 87.5007% ( 83)
00:08:50.667 11.372 - 11.422: 87.9659% ( 79)
00:08:50.667 11.422 - 11.471: 88.3250% ( 61)
00:08:50.667 11.471 - 11.520: 88.8372% ( 87)
00:08:50.667 11.520 - 11.569: 89.3494% ( 87)
00:08:50.667 11.569 - 11.618: 89.7027% ( 60)
00:08:50.667 11.618 - 11.668: 90.1325% ( 73)
00:08:50.667 11.668 - 11.717: 90.6270% ( 84)
00:08:50.667 11.717 - 11.766: 91.2805% ( 111)
00:08:50.667 11.766 - 11.815: 92.1401% ( 146)
00:08:50.667 11.815 - 11.865: 92.8761% ( 125)
00:08:50.667 11.865 - 11.914: 93.6768% ( 136)
00:08:50.667 11.914 - 11.963: 94.2596% ( 99)
00:08:50.667 11.963 - 12.012: 94.8013% ( 92)
00:08:50.667 12.012 - 12.062: 95.1074% ( 52)
00:08:50.667 12.062 - 12.111: 95.4607% ( 60)
00:08:50.667 12.111 - 12.160: 95.7080% ( 42)
00:08:50.667 12.160 - 12.209: 95.8552% ( 25)
00:08:50.667 12.209 - 12.258: 95.9670% ( 19)
00:08:50.667 12.258 - 12.308: 96.1024% ( 23)
00:08:50.667 12.308 - 12.357: 96.1554% ( 9)
00:08:50.667 12.357 - 12.406: 96.1849% ( 5)
00:08:50.667 12.406 - 12.455: 96.2025% ( 3)
00:08:50.667 12.455 - 12.505: 96.2202% ( 3)
00:08:50.667 12.505 - 12.554: 96.2320% ( 2)
00:08:50.667 12.554 - 12.603: 96.2496% ( 3)
00:08:50.667 12.603 - 12.702: 96.2614% ( 2)
00:08:50.667 12.702 - 12.800: 96.2732% ( 2)
00:08:50.667 12.800 - 12.898: 96.2908% ( 3)
00:08:50.667 12.898 - 12.997: 96.3144% ( 4)
00:08:50.667 12.997 - 13.095: 96.4204% ( 18)
00:08:50.667 13.095 - 13.194: 96.5205% ( 17)
00:08:50.667 13.194 - 13.292: 96.6971% ( 30)
00:08:50.667 13.292 - 13.391: 96.8148% ( 20)
00:08:50.667 13.391 - 13.489: 96.9267% ( 19)
00:08:50.667 13.489 - 13.588: 97.0150% ( 15)
00:08:50.667 13.588 - 13.686: 97.0680% ( 9)
00:08:50.667 13.686 - 13.785: 97.1092% ( 7)
00:08:50.667 13.785 - 13.883: 97.1445% ( 6)
00:08:50.667 13.883 - 13.982: 97.1504% ( 1)
00:08:50.667 13.982 - 14.080: 97.2034% ( 9)
00:08:50.667 14.080 - 14.178: 97.2387% ( 6)
00:08:50.667 14.178 - 14.277: 97.2623% ( 4)
00:08:50.667 14.277 - 14.375: 97.2800% ( 3)
00:08:50.667 14.375 - 14.474: 97.3094% ( 5)
00:08:50.667 14.474 - 14.572: 97.3565% ( 8)
00:08:50.667 14.572 - 14.671: 97.3859% ( 5)
00:08:50.667 14.671 - 14.769: 97.4389% ( 9)
00:08:50.667 14.769 - 14.868: 97.4742% ( 6)
00:08:50.667 14.868 - 14.966: 97.5155% ( 7)
00:08:50.667 14.966 - 15.065: 97.5449% ( 5)
00:08:50.667 15.065 - 15.163: 97.5743% ( 5)
00:08:50.667 15.163 - 15.262: 97.6273% ( 9)
00:08:50.667 15.262 - 15.360: 97.6509% ( 4)
00:08:50.667 15.360 - 15.458: 97.6862% ( 6)
00:08:50.667 15.458 - 15.557: 97.7156% ( 5)
00:08:50.667 15.557 - 15.655: 97.7333% ( 3)
00:08:50.667 15.655 - 15.754: 97.7745% ( 7)
00:08:50.667 15.754 - 15.852: 97.7981% ( 4)
00:08:50.667 15.852 - 15.951: 97.8098% ( 2)
00:08:50.667 15.951 - 16.049: 97.8393% ( 5)
00:08:50.668 16.049 - 16.148: 97.8452% ( 1)
00:08:50.668 16.148 - 16.246: 97.8687% ( 4)
00:08:50.668 16.246 - 16.345: 97.8864% ( 3)
00:08:50.668 16.345 - 16.443: 97.9452% ( 10)
00:08:50.668 16.443 - 16.542: 98.0218% ( 13)
00:08:50.668 16.542 - 16.640: 98.1395% ( 20)
00:08:50.668 16.640 - 16.738: 98.2220% ( 14)
00:08:50.668 16.738 - 16.837: 98.2808% ( 10)
00:08:50.668 16.837 - 16.935: 98.3338% ( 9)
00:08:50.668 16.935 - 17.034: 98.3927% ( 10)
00:08:50.668 17.034 - 17.132: 98.4692% ( 13)
00:08:50.668 17.132 - 17.231: 98.5399% ( 12)
00:08:50.668 17.231 - 17.329: 98.5988% ( 10)
00:08:50.668 17.329 - 17.428: 98.6635% ( 11)
00:08:50.668 17.428 - 17.526: 98.6930% ( 5)
00:08:50.668 17.526 - 17.625: 98.7283% ( 6)
00:08:50.668 17.625 - 17.723: 98.7813% ( 9)
00:08:50.668 17.723 - 17.822: 98.8048% ( 4)
00:08:50.668 17.822 - 17.920: 98.8284% ( 4)
00:08:50.668 17.920 - 18.018: 98.8519% ( 4)
00:08:50.668 18.018 - 18.117: 98.8814% ( 5)
00:08:50.668 18.117 - 18.215: 98.8873% ( 1)
00:08:50.668 18.215 - 18.314: 98.9108% ( 4)
00:08:50.668 18.314 - 18.412: 98.9344% ( 4)
00:08:50.668 18.412 - 18.511: 98.9402% ( 1)
00:08:50.668 18.511 - 18.609: 98.9461% ( 1)
00:08:50.668 18.609 - 18.708: 98.9579% ( 2)
00:08:50.668 18.708 - 18.806: 98.9638% ( 1)
00:08:50.668 18.905 - 19.003: 98.9815% ( 3)
00:08:50.668 19.200 - 19.298: 98.9873% ( 1)
00:08:50.668 19.298 - 19.397: 98.9932% ( 1)
00:08:50.668 19.889 - 19.988: 98.9991% ( 1)
00:08:50.668 19.988 - 20.086: 99.0050% ( 1)
00:08:50.668 20.185 - 20.283: 99.0168% ( 2)
00:08:50.668 20.578 - 20.677: 99.0227% ( 1)
00:08:50.668 20.677 - 20.775: 99.0286% ( 1)
00:08:50.668 21.071 - 21.169: 99.0344% ( 1)
00:08:50.668 21.169 - 21.268: 99.0403% ( 1)
00:08:50.668 21.563 - 21.662: 99.0462% ( 1)
00:08:50.668 21.858 - 21.957: 99.0580% ( 2)
00:08:50.668 22.055 - 22.154: 99.0639% ( 1)
00:08:50.668 22.154 - 22.252: 99.0698% ( 1)
00:08:50.668 22.252 - 22.351: 99.0815% ( 2)
00:08:50.668 22.646 - 22.745: 99.0874% ( 1)
00:08:50.668 22.745 - 22.843: 99.0933% ( 1)
00:08:50.668 23.237 - 23.335: 99.0992% ( 1)
00:08:50.668 23.926 - 24.025: 99.1051% ( 1)
00:08:50.668 25.206 - 25.403: 99.1110% ( 1)
00:08:50.668 26.585 - 26.782: 99.1228% ( 2)
00:08:50.668 26.978 - 27.175: 99.1286% ( 1)
00:08:50.668 27.372 - 27.569: 99.2464% ( 20)
00:08:50.668 27.569 - 27.766: 99.5231% ( 47)
00:08:50.668 27.766 - 27.963: 99.6467% ( 21)
00:08:50.668 27.963 - 28.160: 99.6880% ( 7)
00:08:50.668 28.160 - 28.357: 99.7292% ( 7)
00:08:50.668 28.357 - 28.554: 99.7763% ( 8)
00:08:50.668 28.554 - 28.751: 99.7822% ( 1)
00:08:50.668 28.751 - 28.948: 99.7939% ( 2)
00:08:50.668 28.948 - 29.145: 99.8057% ( 2)
00:08:50.668 29.538 - 29.735: 99.8116% ( 1)
00:08:50.668 30.326 - 30.523: 99.8175% ( 1)
00:08:50.668 30.917 - 31.114: 99.8351% ( 3)
00:08:50.668 31.114 - 31.311: 99.8528% ( 3)
00:08:50.668 31.311 - 31.508: 99.8646% ( 2)
00:08:50.668 31.508 - 31.705: 99.8764% ( 2)
00:08:50.668 31.902 - 32.098: 99.8822% ( 1)
00:08:50.668 32.098 - 32.295: 99.8940% ( 2)
00:08:50.668 32.295 - 32.492: 99.8999% ( 1)
00:08:50.668 33.280 - 33.477: 99.9176% ( 3)
00:08:50.668 33.674 - 33.871: 99.9235% ( 1)
00:08:50.668 38.400 - 38.597: 99.9293% ( 1)
00:08:50.668 40.566 - 40.763: 99.9352% ( 1)
00:08:50.668 42.338 - 42.535: 99.9470% ( 2)
00:08:50.668 43.126 - 43.323: 99.9529% ( 1)
00:08:50.668 44.111 - 44.308: 99.9588% ( 1)
00:08:50.668 44.898 - 45.095: 99.9647% ( 1)
00:08:50.668 49.231 - 49.428: 99.9706% ( 1)
00:08:50.668 49.428 - 49.625: 99.9764% ( 1)
00:08:50.668 50.018 - 50.215: 99.9823% ( 1)
00:08:50.668 59.471 - 59.865: 99.9941% ( 2)
00:08:50.668 74.043 - 74.437: 100.0000% ( 1)
00:08:50.668
00:08:50.668 Complete histogram
00:08:50.668 ==================
00:08:50.668 Range in us Cumulative Count
00:08:50.668 7.138 - 7.188: 0.0059% ( 1)
00:08:50.668 7.188 - 7.237: 0.0883% ( 14)
00:08:50.668 7.237 - 7.286: 1.8546% ( 300)
00:08:50.668 7.286 - 7.335: 13.0527% ( 1902)
00:08:50.668 7.335 - 7.385: 39.3583% ( 4468)
00:08:50.668 7.385 - 7.434: 65.0633% ( 4366)
00:08:50.668 7.434 - 7.483: 80.7830% ( 2670)
00:08:50.668 7.483 - 7.532: 88.1484% ( 1251)
00:08:50.668 7.532 - 7.582: 91.7928% ( 619)
00:08:50.668 7.582 - 7.631: 93.2352% ( 245)
00:08:50.668 7.631 - 7.680: 94.0065% ( 131)
00:08:50.668 7.680 - 7.729: 94.4127% ( 69)
00:08:50.668 7.729 - 7.778: 94.5952% ( 31)
00:08:50.668 7.778 - 7.828: 94.7130% ( 20)
00:08:50.668 7.828 - 7.877: 94.8190% ( 18)
00:08:50.668 7.877 - 7.926: 94.8837% ( 11)
00:08:50.668 7.926 - 7.975: 94.9603% ( 13)
00:08:50.668 7.975 - 8.025: 95.0368% ( 13)
00:08:50.668 8.025 - 8.074: 95.2016% ( 28)
00:08:50.668 8.074 - 8.123: 95.4195% ( 37)
00:08:50.668 8.123 - 8.172: 95.7845% ( 62)
00:08:50.668 8.172 - 8.222: 96.1613% ( 64)
00:08:50.668 8.222 - 8.271: 96.4734% ( 53)
00:08:50.668 8.271 - 8.320: 96.6500% ( 30)
00:08:50.668 8.320 - 8.369: 96.7913% ( 24)
00:08:50.668 8.369 - 8.418: 96.9267% ( 23)
00:08:50.668 8.418 - 8.468: 97.0150% ( 15)
00:08:50.668 8.468 - 8.517: 97.0562% ( 7)
00:08:50.668 8.517 - 8.566: 97.0857% ( 5)
00:08:50.668 8.566 - 8.615: 97.0974% ( 2)
00:08:50.668 8.615 - 8.665: 97.1033% ( 1)
00:08:50.668 8.665 - 8.714: 97.1151% ( 2)
00:08:50.668 8.714 - 8.763: 97.1210% ( 1)
00:08:50.668 8.763 - 8.812: 97.1269% ( 1)
00:08:50.668 8.812 - 8.862: 97.1328% ( 1)
00:08:50.668 9.157 - 9.206: 97.1387% ( 1)
00:08:50.668 9.206 - 9.255: 97.1504% ( 2)
00:08:50.668 9.354 - 9.403: 97.1563% ( 1)
00:08:50.668 9.551 - 9.600: 97.1681% ( 2)
00:08:50.668 9.649 - 9.698: 97.1799% ( 2)
00:08:50.668 9.748 - 9.797: 97.1916% ( 2)
00:08:50.668 9.797 - 9.846: 97.1975% ( 1)
00:08:50.668 9.846 - 9.895: 97.2270% ( 5)
00:08:50.668 9.895 - 9.945: 97.2505% ( 4)
00:08:50.668 9.945 - 9.994: 97.2564% ( 1)
00:08:50.668 9.994 - 10.043: 97.2800% ( 4)
00:08:50.668 10.043 - 10.092: 97.2858% ( 1)
00:08:50.668 10.092 - 10.142: 97.2976% ( 2)
00:08:50.668 10.142 - 10.191: 97.3212% ( 4)
00:08:50.668 10.191 - 10.240: 97.3447% ( 4)
00:08:50.668 10.240 - 10.289: 97.3506% ( 1)
00:08:50.668 10.289 - 10.338: 97.3683% ( 3)
00:08:50.668 10.338 - 10.388: 97.3742% ( 1)
00:08:50.668 10.388 - 10.437: 97.3859% ( 2)
00:08:50.668 10.437 - 10.486: 97.4095% ( 4)
00:08:50.668 10.486 - 10.535: 97.4154% ( 1)
00:08:50.668 10.535 - 10.585: 97.4271% ( 2)
00:08:50.668 10.585 - 10.634: 97.4330% ( 1)
00:08:50.668 10.634 - 10.683: 97.4566% ( 4)
00:08:50.668 10.683 - 10.732: 97.4625% ( 1)
00:08:50.668 10.732 - 10.782: 97.4742% ( 2)
00:08:50.668 10.782 - 10.831: 97.4919% ( 3)
00:08:50.668 10.880 - 10.929: 97.4978% ( 1)
00:08:50.668 10.929 - 10.978: 97.5037% ( 1)
00:08:50.668 11.028 - 11.077: 97.5096% ( 1)
00:08:50.668 11.077 - 11.126: 97.5155% ( 1)
00:08:50.668 11.126 - 11.175: 97.5272% ( 2)
00:08:50.668 11.175 - 11.225: 97.5331% ( 1)
00:08:50.668 11.372 - 11.422: 97.5390% ( 1)
00:08:50.668 11.422 - 11.471: 97.5449% ( 1)
00:08:50.668 11.963 - 12.012: 97.5508% ( 1)
00:08:50.668 12.012 - 12.062: 97.5567% ( 1)
00:08:50.668 12.062 - 12.111: 97.5626% ( 1)
00:08:50.668 12.406 - 12.455: 97.5743% ( 2)
00:08:50.668 12.455 - 12.505: 97.5802% ( 1)
00:08:50.668 12.505 - 12.554: 97.5861% ( 1)
00:08:50.668 12.554 - 12.603: 97.5920% ( 1)
00:08:50.668 12.603 - 12.702: 97.6214% ( 5)
00:08:50.668 12.702 - 12.800: 97.6391% ( 3)
00:08:50.668 12.800 - 12.898: 97.6980% ( 10)
00:08:50.668 12.898 - 12.997: 97.7863% ( 15)
00:08:50.668 12.997 - 13.095: 97.8334% ( 8)
00:08:50.668 13.095 - 13.194: 97.9335% ( 17)
00:08:50.668 13.194 - 13.292: 98.0041% ( 12)
00:08:50.668 13.292 - 13.391: 98.0865% ( 14)
00:08:50.668 13.391 - 13.489: 98.1513% ( 11)
00:08:50.668 13.489 - 13.588: 98.2337% ( 14)
00:08:50.668 13.588 - 13.686: 98.2985% ( 11)
00:08:50.668 13.686 - 13.785: 98.3691% ( 12)
00:08:50.668 13.785 - 13.883: 98.4162% ( 8)
00:08:50.668 13.883 - 13.982: 98.5046% ( 15)
00:08:50.668 13.982 - 14.080: 98.5752% ( 12)
00:08:50.668 14.080 - 14.178: 98.6164% ( 7)
00:08:50.668 14.178 - 14.277: 98.6812% ( 11)
00:08:50.668 14.277 - 14.375: 98.7401% ( 10)
00:08:50.668 14.375 - 14.474: 98.7931% ( 9)
00:08:50.668 14.474 - 14.572: 98.8166% ( 4)
00:08:50.668 14.572 - 14.671: 98.8402% ( 4)
00:08:50.668 14.671 - 14.769: 98.8578% ( 3)
00:08:50.668 14.769 - 14.868: 98.8696% ( 2)
00:08:50.668 14.868 - 14.966: 98.8755% ( 1)
00:08:50.668 14.966 - 15.065: 98.8873% ( 2)
00:08:50.668 15.163 - 15.262: 98.8990% ( 2)
00:08:50.668 15.360 - 15.458: 98.9167% ( 3)
00:08:50.669 15.655 - 15.754: 98.9226% ( 1)
00:08:50.669 15.754 - 15.852: 98.9285% ( 1)
00:08:50.669 15.852 - 15.951: 98.9344% ( 1)
00:08:50.669 16.049 - 16.148: 98.9402% ( 1)
00:08:50.669 16.345 - 16.443: 98.9461% ( 1)
00:08:50.669 16.443 - 16.542: 98.9520% ( 1)
00:08:50.669 16.640 - 16.738: 98.9579% ( 1)
00:08:50.669 16.935 - 17.034: 98.9697% ( 2)
00:08:50.669 17.034 - 17.132: 98.9756% ( 1)
00:08:50.669 17.329 - 17.428: 98.9991% ( 4)
00:08:50.669 17.428 - 17.526: 99.0050% ( 1)
00:08:50.669 17.625 - 17.723: 99.0168% ( 2)
00:08:50.669 17.822 - 17.920: 99.0227% ( 1)
00:08:50.669 18.609 - 18.708: 99.0344% ( 2)
00:08:50.669 18.905 - 19.003: 99.0462% ( 2)
00:08:50.669 19.102 - 19.200: 99.0580% ( 2)
00:08:50.669 19.200 - 19.298: 99.0639% ( 1)
00:08:50.669 19.397 - 19.495: 99.0698% ( 1)
00:08:50.669 19.495 - 19.594: 99.0992% ( 5)
00:08:50.669 19.594 - 19.692: 99.2641% ( 28)
00:08:50.669 19.692 - 19.791: 99.4819% ( 37)
00:08:50.669 19.791 - 19.889: 99.6703% ( 32)
00:08:50.669 19.889 - 19.988: 99.7409% ( 12)
00:08:50.669 19.988 - 20.086: 99.7468% ( 1)
00:08:50.669 20.086 - 20.185: 99.7527% ( 1)
00:08:50.669 20.185 - 20.283: 99.7704% ( 3)
00:08:50.669 20.283 - 20.382: 99.7880% ( 3)
00:08:50.669 20.874 - 20.972: 99.7939% ( 1)
00:08:50.669 21.858 - 21.957: 99.7998% ( 1)
00:08:50.669 22.055 - 22.154: 99.8116% ( 2)
00:08:50.669 22.154 - 22.252: 99.8234% ( 2)
00:08:50.669 22.252 - 22.351: 99.8293% ( 1)
00:08:50.669 22.351 - 22.449: 99.8528% ( 4)
00:08:50.669 22.646 - 22.745: 99.8587% ( 1)
00:08:50.669 23.138 - 23.237: 99.8646% ( 1)
00:08:50.669 23.335 - 23.434: 99.8705% ( 1)
00:08:50.669 23.828 - 23.926: 99.8764% ( 1)
00:08:50.669 24.517 - 24.615: 99.8822% ( 1)
00:08:50.669 25.797 - 25.994: 99.8881% ( 1)
00:08:50.669 27.175 - 27.372: 99.8940% ( 1)
00:08:50.669 27.372 - 27.569: 99.9058% ( 2)
00:08:50.669 29.538 - 29.735: 99.9117% ( 1)
00:08:50.669 30.326 - 30.523: 99.9176% ( 1)
00:08:50.669 30.523 - 30.720: 99.9235% ( 1)
00:08:50.669 34.855 - 35.052: 99.9352% ( 2)
00:08:50.669 35.446 - 35.643: 99.9411% ( 1)
00:08:50.669 39.975 - 40.172: 99.9470% ( 1)
00:08:50.669 40.369 - 40.566: 99.9529% ( 1)
00:08:50.669 40.566 - 40.763: 99.9588% ( 1)
00:08:50.669 42.732 - 42.929: 99.9647% ( 1)
00:08:50.669 54.351 - 54.745: 99.9706% ( 1)
00:08:50.669 64.197 - 64.591: 99.9764% ( 1)
00:08:50.669 65.772 - 66.166: 99.9823% ( 1)
00:08:50.669 71.680 - 72.074: 99.9882% ( 1)
00:08:50.669 189.834 - 190.622: 99.9941% ( 1)
285.145 - 286.720: 100.0000% ( 1)
00:08:50.669
00:08:50.669
00:08:50.669 real 0m1.210s
00:08:50.669 user 0m1.074s
00:08:50.669 sys 0m0.092s
00:08:50.669 03:59:38 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:50.669 03:59:38 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:08:50.669 ************************************
00:08:50.669 END TEST nvme_overhead
00:08:50.669 ************************************
00:08:50.669 03:59:38 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:08:50.669 03:59:38 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:08:50.669 03:59:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:50.669 03:59:38 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:50.669 ************************************
00:08:50.669 START TEST nvme_arbitration
00:08:50.669 ************************************
00:08:50.669 03:59:38 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:08:53.951 Initializing NVMe Controllers
00:08:53.951 Attached to 0000:00:10.0
00:08:53.951 Attached to 0000:00:11.0
00:08:53.951 Attached to 0000:00:13.0
00:08:53.951 Attached to 0000:00:12.0
00:08:53.951 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:08:53.951 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:08:53.951 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:08:53.951 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:08:53.951 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:08:53.951 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:08:53.951 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:08:53.951 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:08:53.951 Initialization complete. Launching workers.
00:08:53.951 Starting thread on core 1 with urgent priority queue
00:08:53.951 Starting thread on core 2 with urgent priority queue
00:08:53.951 Starting thread on core 3 with urgent priority queue
00:08:53.951 Starting thread on core 0 with urgent priority queue
00:08:53.951 QEMU NVMe Ctrl (12340 ) core 0: 917.33 IO/s 109.01 secs/100000 ios
00:08:53.951 QEMU NVMe Ctrl (12342 ) core 0: 917.33 IO/s 109.01 secs/100000 ios
00:08:53.951 QEMU NVMe Ctrl (12341 ) core 1: 917.33 IO/s 109.01 secs/100000 ios
00:08:53.951 QEMU NVMe Ctrl (12342 ) core 1: 917.33 IO/s 109.01 secs/100000 ios
00:08:53.951 QEMU NVMe Ctrl (12343 ) core 2: 1024.00 IO/s 97.66 secs/100000 ios
00:08:53.951 QEMU NVMe Ctrl (12342 ) core 3: 896.00 IO/s 111.61 secs/100000 ios
00:08:53.951 ========================================================
00:08:53.951
00:08:53.951
00:08:53.951 real 0m3.305s
00:08:53.951 user 0m9.240s
00:08:53.951 sys 0m0.111s
00:08:53.951 03:59:41 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:53.951 03:59:41 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:08:53.951 ************************************
00:08:53.951 END TEST nvme_arbitration
00:08:53.951 ************************************
00:08:53.951 03:59:41 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:08:53.951 03:59:41 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:53.951 03:59:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:53.951 03:59:41 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:53.951 ************************************
00:08:53.951 START TEST nvme_single_aen
00:08:53.951 ************************************
00:08:53.951 03:59:41 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:08:54.209 Asynchronous Event Request test
00:08:54.209 Attached to 0000:00:10.0
00:08:54.209 Attached to 0000:00:11.0
00:08:54.209 Attached to 0000:00:13.0
00:08:54.209 Attached to 0000:00:12.0
00:08:54.209 Reset controller to setup AER completions for this process
00:08:54.209 Registering asynchronous event callbacks...
00:08:54.209 Getting orig temperature thresholds of all controllers
00:08:54.209 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:54.209 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:54.209 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:54.209 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:54.209 Setting all controllers temperature threshold low to trigger AER
00:08:54.209 Waiting for all controllers temperature threshold to be set lower
00:08:54.209 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:54.209 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:08:54.209 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:54.209 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:08:54.209 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:54.209 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:08:54.209 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:54.209 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:08:54.209 Waiting for all controllers to trigger AER and reset threshold
00:08:54.209 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:54.209 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:54.209 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:54.209 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:54.209 Cleaning up...
00:08:54.209
00:08:54.209 real 0m0.221s
00:08:54.209 user 0m0.082s
00:08:54.209 sys 0m0.095s
00:08:54.209 03:59:41 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:54.209 03:59:41 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:08:54.209 ************************************
00:08:54.209 END TEST nvme_single_aen
00:08:54.209 ************************************
00:08:54.209 03:59:41 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:08:54.209 03:59:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:54.209 03:59:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:54.209 03:59:41 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:54.209 ************************************
00:08:54.210 START TEST nvme_doorbell_aers
00:08:54.210 ************************************
00:08:54.210 03:59:41 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:08:54.210 03:59:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:08:54.210 03:59:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:08:54.210 03:59:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:08:54.210 03:59:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:08:54.210 03:59:41 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:08:54.210 03:59:41 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:08:54.210 03:59:41 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:54.210 03:59:41 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:08:54.210 03:59:41 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:08:54.210 03:59:41 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:08:54.210 03:59:41 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:08:54.210 03:59:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:08:54.210 03:59:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:08:54.470 [2024-12-06 03:59:41.914735] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63226) is not found. Dropping the request.
00:09:04.448 Executing: test_write_invalid_db
00:09:04.448 Waiting for AER completion...
00:09:04.448 Failure: test_write_invalid_db
00:09:04.448
00:09:04.448 Executing: test_invalid_db_write_overflow_sq
00:09:04.448 Waiting for AER completion...
00:09:04.448 Failure: test_invalid_db_write_overflow_sq
00:09:04.448
00:09:04.448 Executing: test_invalid_db_write_overflow_cq
00:09:04.448 Waiting for AER completion...
00:09:04.448 Failure: test_invalid_db_write_overflow_cq
00:09:04.448
00:09:04.448 03:59:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:09:04.448 03:59:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
00:09:04.448 [2024-12-06 03:59:51.922186] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63226) is not found. Dropping the request.
00:09:14.404 Executing: test_write_invalid_db
00:09:14.405 Waiting for AER completion...
00:09:14.405 Failure: test_write_invalid_db
00:09:14.405
00:09:14.405 Executing: test_invalid_db_write_overflow_sq
00:09:14.405 Waiting for AER completion...
00:09:14.405 Failure: test_invalid_db_write_overflow_sq
00:09:14.405
00:09:14.405 Executing: test_invalid_db_write_overflow_cq
00:09:14.405 Waiting for AER completion...
00:09:14.405 Failure: test_invalid_db_write_overflow_cq
00:09:14.405
00:09:14.405 04:00:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:09:14.405 04:00:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0'
00:09:14.662 [2024-12-06 04:00:01.958422] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63226) is not found. Dropping the request.
00:09:24.637 Executing: test_write_invalid_db
00:09:24.637 Waiting for AER completion...
00:09:24.637 Failure: test_write_invalid_db
00:09:24.637
00:09:24.637 Executing: test_invalid_db_write_overflow_sq
00:09:24.637 Waiting for AER completion...
00:09:24.637 Failure: test_invalid_db_write_overflow_sq
00:09:24.637
00:09:24.637 Executing: test_invalid_db_write_overflow_cq
00:09:24.637 Waiting for AER completion...
00:09:24.637 Failure: test_invalid_db_write_overflow_cq
00:09:24.637
00:09:24.637 04:00:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:09:24.637 04:00:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0'
00:09:24.637 [2024-12-06 04:00:12.001749] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63226) is not found. Dropping the request.
00:09:34.663 Executing: test_write_invalid_db
00:09:34.663 Waiting for AER completion...
00:09:34.663 Failure: test_write_invalid_db
00:09:34.663
00:09:34.663 Executing: test_invalid_db_write_overflow_sq
00:09:34.663 Waiting for AER completion...
00:09:34.663 Failure: test_invalid_db_write_overflow_sq
00:09:34.663
00:09:34.663 Executing: test_invalid_db_write_overflow_cq
00:09:34.663 Waiting for AER completion...
00:09:34.663 Failure: test_invalid_db_write_overflow_cq
00:09:34.663
00:09:34.663
00:09:34.663 real 0m40.173s
00:09:34.663 user 0m34.119s
00:09:34.663 sys 0m5.688s
00:09:34.663 04:00:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:34.663 04:00:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:09:34.663 ************************************
00:09:34.663 END TEST nvme_doorbell_aers
00:09:34.663 ************************************
00:09:34.663 04:00:21 nvme -- nvme/nvme.sh@97 -- # uname
00:09:34.663 04:00:21 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:09:34.663 04:00:21 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:09:34.663 04:00:21 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:09:34.663 04:00:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:34.663 04:00:21 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:34.663 ************************************
00:09:34.663 START TEST nvme_multi_aen
00:09:34.663 ************************************
00:09:34.663 04:00:21 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:09:34.663 [2024-12-06 04:00:22.032226] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63226) is not found. Dropping the request.
00:09:34.663 [2024-12-06 04:00:22.032294] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63226) is not found. Dropping the request.
00:09:34.663 [2024-12-06 04:00:22.032306] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63226) is not found. Dropping the request.
00:09:34.663 [2024-12-06 04:00:22.033806] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63226) is not found. Dropping the request.
00:09:34.663 [2024-12-06 04:00:22.033851] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63226) is not found. Dropping the request.
00:09:34.663 [2024-12-06 04:00:22.033861] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63226) is not found. Dropping the request.
00:09:34.663 [2024-12-06 04:00:22.034906] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63226) is not found. Dropping the request.
00:09:34.663 [2024-12-06 04:00:22.034938] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63226) is not found. Dropping the request.
00:09:34.663 [2024-12-06 04:00:22.034948] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63226) is not found. Dropping the request.
00:09:34.663 [2024-12-06 04:00:22.035884] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63226) is not found. Dropping the request.
00:09:34.663 [2024-12-06 04:00:22.035914] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63226) is not found. Dropping the request.
00:09:34.663 [2024-12-06 04:00:22.035923] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63226) is not found. Dropping the request.
00:09:34.663 Child process pid: 63752
00:09:34.922 [Child] Asynchronous Event Request test
00:09:34.922 [Child] Attached to 0000:00:10.0
00:09:34.922 [Child] Attached to 0000:00:11.0
00:09:34.922 [Child] Attached to 0000:00:13.0
00:09:34.922 [Child] Attached to 0000:00:12.0
00:09:34.922 [Child] Registering asynchronous event callbacks...
00:09:34.922 [Child] Getting orig temperature thresholds of all controllers
00:09:34.922 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:09:34.922 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:09:34.922 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:09:34.922 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:09:34.922 [Child] Waiting for all controllers to trigger AER and reset threshold
00:09:34.922 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:09:34.922 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:09:34.922 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:09:34.922 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:09:34.922 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:09:34.922 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:09:34.922 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:09:34.922 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:09:34.922 [Child] Cleaning up...
00:09:34.922 Asynchronous Event Request test
00:09:34.922 Attached to 0000:00:10.0
00:09:34.922 Attached to 0000:00:11.0
00:09:34.922 Attached to 0000:00:13.0
00:09:34.923 Attached to 0000:00:12.0
00:09:34.923 Reset controller to setup AER completions for this process
00:09:34.923 Registering asynchronous event callbacks...
00:09:34.923 Getting orig temperature thresholds of all controllers
00:09:34.923 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:09:34.923 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:09:34.923 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:09:34.923 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:09:34.923 Setting all controllers temperature threshold low to trigger AER
00:09:34.923 Waiting for all controllers temperature threshold to be set lower
00:09:34.923 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:09:34.923 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:09:34.923 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:09:34.923 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:09:34.923 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:09:34.923 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:09:34.923 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:09:34.923 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:09:34.923 Waiting for all controllers to trigger AER and reset threshold
00:09:34.923 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:09:34.923 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:09:34.923 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:09:34.923 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:09:34.923 Cleaning up...
00:09:34.923
00:09:34.923 real 0m0.424s
00:09:34.923 user 0m0.146s
00:09:34.923 sys 0m0.178s
00:09:34.923 04:00:22 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:34.923 04:00:22 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x
00:09:34.923 ************************************
00:09:34.923 END TEST nvme_multi_aen
00:09:34.923 ************************************
00:09:34.923 04:00:22 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:09:34.923 04:00:22 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:34.923 04:00:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:34.923 04:00:22 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:34.923 ************************************
00:09:34.923 START TEST nvme_startup
00:09:34.923 ************************************
00:09:34.923 04:00:22 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:09:35.182 Initializing NVMe Controllers
00:09:35.182 Attached to 0000:00:10.0
00:09:35.182 Attached to 0000:00:11.0
00:09:35.182 Attached to 0000:00:13.0
00:09:35.182 Attached to 0000:00:12.0
00:09:35.182 Initialization complete.
00:09:35.182 Time used:138722.875 (us).
00:09:35.182
00:09:35.182 real 0m0.194s
00:09:35.182 user 0m0.074s
00:09:35.182 sys 0m0.078s
00:09:35.182 04:00:22 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:35.182 04:00:22 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x
00:09:35.182 ************************************
00:09:35.182 END TEST nvme_startup
00:09:35.182 ************************************
00:09:35.182 04:00:22 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:09:35.182 04:00:22 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:35.182 04:00:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:35.182 04:00:22 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:35.182 ************************************
00:09:35.182 START TEST nvme_multi_secondary
00:09:35.182 ************************************
00:09:35.182 04:00:22 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary
00:09:35.182 04:00:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63797
00:09:35.182 04:00:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63798
00:09:35.182 04:00:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:09:35.182 04:00:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:09:35.182 04:00:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:09:38.514 Initializing NVMe Controllers
00:09:38.514 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:38.514 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:38.514 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:38.514 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:38.514 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:09:38.514 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:09:38.514 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:09:38.514 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:09:38.514 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:09:38.514 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:09:38.514 Initialization complete. Launching workers.
00:09:38.514 ========================================================
00:09:38.514 Latency(us)
00:09:38.514 Device Information : IOPS MiB/s Average min max
00:09:38.514 PCIE (0000:00:10.0) NSID 1 from core 2: 3126.12 12.21 5115.99 746.09 27491.05
00:09:38.514 PCIE (0000:00:11.0) NSID 1 from core 2: 3126.12 12.21 5117.29 768.46 31370.97
00:09:38.514 PCIE (0000:00:13.0) NSID 1 from core 2: 3126.12 12.21 5117.67 742.24 25678.33
00:09:38.514 PCIE (0000:00:12.0) NSID 1 from core 2: 3126.12 12.21 5124.62 745.93 21680.90
00:09:38.514 PCIE (0000:00:12.0) NSID 2 from core 2: 3126.12 12.21 5124.60 752.28 23939.71
00:09:38.514 PCIE (0000:00:12.0) NSID 3 from core 2: 3126.12 12.21 5125.51 764.00 25825.61
00:09:38.514 ========================================================
00:09:38.514 Total : 18756.71 73.27 5120.95 742.24 31370.97
00:09:38.514
00:09:38.514 Initializing NVMe Controllers
00:09:38.514 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:38.514 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:38.514 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:38.514 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:38.514 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:09:38.514 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:09:38.514 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:09:38.514 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:09:38.514 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:09:38.514 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:09:38.514 Initialization complete. Launching workers.
00:09:38.514 ========================================================
00:09:38.514 Latency(us)
00:09:38.514 Device Information : IOPS MiB/s Average min max
00:09:38.514 PCIE (0000:00:10.0) NSID 1 from core 1: 7617.85 29.76 2098.89 695.19 8237.14
00:09:38.514 PCIE (0000:00:11.0) NSID 1 from core 1: 7617.85 29.76 2099.88 741.16 7621.00
00:09:38.514 PCIE (0000:00:13.0) NSID 1 from core 1: 7617.85 29.76 2099.82 731.25 8113.83
00:09:38.514 PCIE (0000:00:12.0) NSID 1 from core 1: 7617.85 29.76 2099.80 725.32 7599.13
00:09:38.514 PCIE (0000:00:12.0) NSID 2 from core 1: 7617.85 29.76 2099.82 715.06 7729.69
00:09:38.514 PCIE (0000:00:12.0) NSID 3 from core 1: 7617.85 29.76 2099.76 721.64 7673.61
00:09:38.514 ========================================================
00:09:38.514 Total : 45707.08 178.54 2099.66 695.19 8237.14
00:09:38.514
00:09:38.514 04:00:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63797
00:09:41.041 Initializing NVMe Controllers
00:09:41.041 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:41.041 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:41.041 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:41.041 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:41.041 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:09:41.041 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:09:41.041 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:09:41.041 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:09:41.041 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:09:41.041 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:09:41.041 Initialization complete. Launching workers.
00:09:41.041 ========================================================
00:09:41.041 Latency(us)
00:09:41.041 Device Information : IOPS MiB/s Average min max
00:09:41.041 PCIE (0000:00:10.0) NSID 1 from core 0: 10908.25 42.61 1465.52 670.10 10694.25
00:09:41.041 PCIE (0000:00:11.0) NSID 1 from core 0: 10908.25 42.61 1466.38 684.91 10264.61
00:09:41.041 PCIE (0000:00:13.0) NSID 1 from core 0: 10907.65 42.61 1466.43 670.16 9414.84
00:09:41.041 PCIE (0000:00:12.0) NSID 1 from core 0: 10908.25 42.61 1466.32 649.13 9412.19
00:09:41.041 PCIE (0000:00:12.0) NSID 2 from core 0: 10908.25 42.61 1466.29 631.99 9452.21
00:09:41.041 PCIE (0000:00:12.0) NSID 3 from core 0: 10908.25 42.61 1466.28 597.97 10957.95
00:09:41.041 ========================================================
00:09:41.041 Total : 65448.90 255.66 1466.20 597.97 10957.95
00:09:41.041
00:09:41.041 04:00:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63798
00:09:41.041 04:00:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63878
00:09:41.041 04:00:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63879
00:09:41.041 04:00:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:09:41.041 04:00:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:09:41.041 04:00:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:09:44.321 Initializing NVMe Controllers
00:09:44.321 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:44.321 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:44.321 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:44.321 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:44.321 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:09:44.321 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:09:44.321 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:09:44.321 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:09:44.321 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:09:44.321 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:09:44.321 Initialization complete. Launching workers.
00:09:44.321 ========================================================
00:09:44.321 Latency(us)
00:09:44.321 Device Information : IOPS MiB/s Average min max
00:09:44.321 PCIE (0000:00:10.0) NSID 1 from core 0: 4963.83 19.39 3221.53 719.20 11862.40
00:09:44.321 PCIE (0000:00:11.0) NSID 1 from core 0: 4963.83 19.39 3225.45 732.56 11939.42
00:09:44.321 PCIE (0000:00:13.0) NSID 1 from core 0: 4963.83 19.39 3225.46 738.59 11516.85
00:09:44.321 PCIE (0000:00:12.0) NSID 1 from core 0: 4963.83 19.39 3226.29 725.52 12287.19
00:09:44.321 PCIE (0000:00:12.0) NSID 2 from core 0: 4963.83 19.39 3226.26 738.47 12202.24
00:09:44.321 PCIE (0000:00:12.0) NSID 3 from core 0: 4963.83 19.39 3226.97 730.32 12241.64
00:09:44.321 ========================================================
00:09:44.321 Total : 29783.00 116.34 3225.33 719.20 12287.19
00:09:44.321
00:09:44.321 Initializing NVMe Controllers
00:09:44.321 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:44.321 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:44.321 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:44.321 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:44.321 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:09:44.321 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:09:44.321 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:09:44.321 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:09:44.321 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:09:44.321 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:09:44.321 Initialization complete. Launching workers.
00:09:44.321 ========================================================
00:09:44.321 Latency(us)
00:09:44.321 Device Information : IOPS MiB/s Average min max
00:09:44.321 PCIE (0000:00:10.0) NSID 1 from core 1: 4937.99 19.29 3238.56 836.41 12825.42
00:09:44.321 PCIE (0000:00:11.0) NSID 1 from core 1: 4937.99 19.29 3240.28 834.40 13953.77
00:09:44.321 PCIE (0000:00:13.0) NSID 1 from core 1: 4937.99 19.29 3240.27 851.77 11968.43
00:09:44.321 PCIE (0000:00:12.0) NSID 1 from core 1: 4937.99 19.29 3240.25 848.01 12256.95
00:09:44.321 PCIE (0000:00:12.0) NSID 2 from core 1: 4937.99 19.29 3240.23 862.98 11805.53
00:09:44.321 PCIE (0000:00:12.0) NSID 3 from core 1: 4937.99 19.29 3240.23 868.00 12291.56
00:09:44.321 ========================================================
00:09:44.321 Total : 29627.95 115.73 3239.97 834.40 13953.77
00:09:44.321
00:09:46.221 Initializing NVMe Controllers
00:09:46.221 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:46.221 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:46.221 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:46.221 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:46.221 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:09:46.221 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:09:46.221 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:09:46.221 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:09:46.221 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:09:46.221 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:09:46.221 Initialization complete. Launching workers.
00:09:46.221 ========================================================
00:09:46.221 Latency(us)
00:09:46.221 Device Information : IOPS MiB/s Average min max
00:09:46.221 PCIE (0000:00:10.0) NSID 1 from core 2: 2606.63 10.18 6136.19 740.97 39554.83
00:09:46.221 PCIE (0000:00:11.0) NSID 1 from core 2: 2606.63 10.18 6137.70 714.88 33370.37
00:09:46.221 PCIE (0000:00:13.0) NSID 1 from core 2: 2606.63 10.18 6137.63 720.04 33441.56
00:09:46.221 PCIE (0000:00:12.0) NSID 1 from core 2: 2606.63 10.18 6136.94 699.88 32269.87
00:09:46.221 PCIE (0000:00:12.0) NSID 2 from core 2: 2606.63 10.18 6137.19 659.76 32984.63
00:09:46.221 PCIE (0000:00:12.0) NSID 3 from core 2: 2606.63 10.18 6137.44 608.00 38340.97
00:09:46.221 ========================================================
00:09:46.221 Total : 15639.78 61.09 6137.18 608.00 39554.83
00:09:46.221
00:09:46.221 04:00:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63878
00:09:46.221 04:00:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63879
00:09:46.221 00:09:46.221
00:09:46.221 real 0m10.866s
00:09:46.221 user 0m18.360s
00:09:46.221 sys 0m0.663s
00:09:46.221 ************************************
00:09:46.221 END TEST nvme_multi_secondary
00:09:46.221 ************************************
00:09:46.222 04:00:33 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:46.222 04:00:33 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
00:09:46.222 04:00:33 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:09:46.222 04:00:33 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:09:46.222 04:00:33 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62835 ]]
00:09:46.222 04:00:33 nvme -- common/autotest_common.sh@1094 -- # kill 62835
00:09:46.222 04:00:33 nvme -- common/autotest_common.sh@1095 -- # wait 62835
00:09:46.222 [2024-12-06 04:00:33.473259] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63751) is not found. Dropping the request.
00:09:46.222 [2024-12-06 04:00:33.473332] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63751) is not found. Dropping the request.
00:09:46.222 [2024-12-06 04:00:33.473363] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63751) is not found. Dropping the request.
00:09:46.222 [2024-12-06 04:00:33.473382] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63751) is not found. Dropping the request.
00:09:46.222 [2024-12-06 04:00:33.475874] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63751) is not found. Dropping the request.
00:09:46.222 [2024-12-06 04:00:33.475915] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63751) is not found. Dropping the request.
00:09:46.222 [2024-12-06 04:00:33.475927] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63751) is not found. Dropping the request.
00:09:46.222 [2024-12-06 04:00:33.475940] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63751) is not found. Dropping the request.
00:09:46.222 [2024-12-06 04:00:33.477712] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63751) is not found. Dropping the request.
00:09:46.222 [2024-12-06 04:00:33.477761] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63751) is not found. Dropping the request. 00:09:46.222 [2024-12-06 04:00:33.477773] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63751) is not found. Dropping the request. 00:09:46.222 [2024-12-06 04:00:33.477788] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63751) is not found. Dropping the request. 00:09:46.222 [2024-12-06 04:00:33.479478] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63751) is not found. Dropping the request. 00:09:46.222 [2024-12-06 04:00:33.479517] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63751) is not found. Dropping the request. 00:09:46.222 [2024-12-06 04:00:33.479527] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63751) is not found. Dropping the request. 00:09:46.222 [2024-12-06 04:00:33.479538] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63751) is not found. Dropping the request. 00:09:46.222 [2024-12-06 04:00:33.584038] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:09:46.222 04:00:33 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:46.222 04:00:33 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:46.222 04:00:33 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:46.222 04:00:33 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:46.222 04:00:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.222 04:00:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:46.222 ************************************ 00:09:46.222 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:46.222 ************************************ 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:46.222 * Looking for test storage... 
00:09:46.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.222 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:46.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.481 --rc genhtml_branch_coverage=1 00:09:46.481 --rc genhtml_function_coverage=1 00:09:46.481 --rc genhtml_legend=1 00:09:46.481 --rc geninfo_all_blocks=1 00:09:46.481 --rc geninfo_unexecuted_blocks=1 00:09:46.481 00:09:46.481 ' 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:46.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.481 --rc genhtml_branch_coverage=1 00:09:46.481 --rc genhtml_function_coverage=1 00:09:46.481 --rc genhtml_legend=1 00:09:46.481 --rc geninfo_all_blocks=1 00:09:46.481 --rc geninfo_unexecuted_blocks=1 00:09:46.481 00:09:46.481 ' 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:46.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.481 --rc genhtml_branch_coverage=1 00:09:46.481 --rc genhtml_function_coverage=1 00:09:46.481 --rc genhtml_legend=1 00:09:46.481 --rc geninfo_all_blocks=1 00:09:46.481 --rc geninfo_unexecuted_blocks=1 00:09:46.481 00:09:46.481 ' 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:46.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.481 --rc genhtml_branch_coverage=1 00:09:46.481 --rc genhtml_function_coverage=1 00:09:46.481 --rc genhtml_legend=1 00:09:46.481 --rc geninfo_all_blocks=1 00:09:46.481 --rc geninfo_unexecuted_blocks=1 00:09:46.481 00:09:46.481 ' 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:46.481 
04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:46.481 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:46.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.482 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:46.482 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:46.482 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64035 00:09:46.482 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:46.482 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64035 00:09:46.482 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64035 ']' 00:09:46.482 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.482 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.482 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
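The bdf lookup traced above reduces to: ask scripts/gen_nvme.sh for the controller config JSON, pull every traddr with jq, and keep the first address. Condensed, under the same repo layout as this run:

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe devices found" >&2; exit 1; }
    echo "${bdfs[0]}"    # in this run: 0000:00:10.0

The '[' -z 0000:00:10.0 ']' guard that follows in the trace is the script double-checking that a bdf was actually found before launching spdk_tgt.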
00:09:46.482 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.482 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:46.482 04:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:46.482 [2024-12-06 04:00:33.886327] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:09:46.482 [2024-12-06 04:00:33.886446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64035 ] 00:09:46.740 [2024-12-06 04:00:34.057537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:46.740 [2024-12-06 04:00:34.162761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.740 [2024-12-06 04:00:34.162952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.740 [2024-12-06 04:00:34.163247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.740 [2024-12-06 04:00:34.163270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.310 04:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.310 04:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:47.310 04:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:47.310 04:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.310 04:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:47.310 nvme0n1 00:09:47.310 04:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.310 04:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:47.310 04:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_aQ6KC.txt 00:09:47.310 04:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:47.310 04:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.310 04:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:47.571 true 00:09:47.571 04:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.571 04:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:47.571 04:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733457634 00:09:47.571 04:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64058 00:09:47.571 04:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:47.571 04:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:47.571 04:00:34 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:49.469 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:49.469 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.469 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:49.469 [2024-12-06 04:00:36.851439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:49.469 [2024-12-06 04:00:36.851812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:49.469 [2024-12-06 04:00:36.851842] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:49.469 [2024-12-06 04:00:36.851856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.469 [2024-12-06 04:00:36.853754] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:49.469 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64058 00:09:49.469 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64058 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64058 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_aQ6KC.txt 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:49.470 04:00:36 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_aQ6KC.txt 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64035 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64035 ']' 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64035 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64035 00:09:49.470 killing process with pid 64035 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64035' 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64035 00:09:49.470 04:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64035 00:09:51.384 04:00:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:51.384 04:00:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:51.384 
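To recap what this test just verified: it armed a one-shot error injection on the next admin Get Features command (opc 10, sct 0, sc 1, i.e. Invalid Opcode) with do_not_submit and a 15 s timeout, issued that command through bdev_nvme_send_cmd, reset the controller while the command was stuck, and then decoded the saved completion to confirm the injected status came back, which is what the INVALID OPCODE (00/01) line earlier shows. The decode step is the base64_decode_bits helper traced above; a hedged sketch of the same idea (the function name and the offset/mask argument convention here are assumptions mirroring the trace):

    # Decode a base64-encoded 16-byte NVMe completion and extract a bit field
    # from its status word (bytes 14-15, little endian: bit 0 = phase tag,
    # bits 1-8 = SC, bits 9-11 = SCT).
    decode_status_bits() {
        local b64=$1 off=$2 mask=$3
        local bytes=($(base64 -d <(printf '%s' "$b64") | hexdump -ve '/1 "0x%02x\n"'))
        local status=$(( bytes[14] | bytes[15] << 8 ))
        printf '0x%x\n' $(( (status >> off) & mask ))
    }
    decode_status_bits AAAAAAAAAAAAAAAAAAACAA== 1 255   # -> 0x1, the injected SC
    decode_status_bits AAAAAAAAAAAAAAAAAAACAA== 9 3     # -> 0x0, the injected SCT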
************************************ 00:09:51.384 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:51.384 ************************************ 00:09:51.384 00:09:51.384 real 0m4.786s 00:09:51.384 user 0m17.042s 00:09:51.384 sys 0m0.478s 00:09:51.384 04:00:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.384 04:00:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:51.384 04:00:38 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:51.384 04:00:38 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:51.384 04:00:38 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:51.384 04:00:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.384 04:00:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:51.384 ************************************ 00:09:51.384 START TEST nvme_fio 00:09:51.384 ************************************ 00:09:51.384 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:51.384 04:00:38 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:51.384 04:00:38 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:51.384 04:00:38 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:51.384 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:51.384 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:51.384 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:51.384 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:51.384 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:51.384 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:51.384 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:51.384 04:00:38 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:51.384 04:00:38 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:51.384 04:00:38 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:51.384 04:00:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:51.384 04:00:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:51.384 04:00:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:51.384 04:00:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:51.673 04:00:38 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:51.673 04:00:38 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:51.673 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:51.673 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:51.673 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:09:51.673 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:51.673 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:51.673 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:51.673 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:51.673 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:51.673 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:51.673 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:51.673 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:51.673 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:51.673 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:51.673 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:51.673 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:51.673 04:00:38 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:51.673 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:51.673 fio-3.35 00:09:51.673 Starting 1 thread 00:09:56.937 00:09:56.937 test: (groupid=0, jobs=1): err= 0: pid=64200: Fri Dec 6 04:00:43 2024 00:09:56.937 read: IOPS=19.1k, BW=74.8MiB/s (78.4MB/s)(151MiB/2013msec) 00:09:56.937 slat (nsec): min=3373, max=84977, avg=5013.35, stdev=2248.23 00:09:56.937 clat (usec): min=662, max=13426, avg=2793.56, stdev=954.52 00:09:56.937 lat (usec): min=666, max=13430, avg=2798.57, stdev=955.37 00:09:56.937 clat percentiles (usec): 00:09:56.937 | 1.00th=[ 1319], 5.00th=[ 1844], 10.00th=[ 2180], 20.00th=[ 2409], 00:09:56.937 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2573], 00:09:56.937 | 70.00th=[ 2638], 80.00th=[ 2933], 90.00th=[ 3949], 95.00th=[ 4948], 00:09:56.937 | 99.00th=[ 6194], 99.50th=[ 6980], 99.90th=[ 8356], 99.95th=[12780], 00:09:56.937 | 99.99th=[13435] 00:09:56.937 bw ( KiB/s): min=41176, max=94736, per=100.00%, avg=76998.00, stdev=25093.90, samples=4 00:09:56.937 iops : min=10294, max=23684, avg=19249.50, stdev=6273.48, samples=4 00:09:56.937 write: IOPS=19.1k, BW=74.7MiB/s (78.3MB/s)(150MiB/2013msec); 0 zone resets 00:09:56.937 slat (nsec): min=3486, max=74942, avg=5339.73, stdev=2255.45 00:09:56.937 clat (usec): min=683, max=28906, avg=3877.18, stdev=3968.07 00:09:56.937 lat (usec): min=688, max=28911, avg=3882.51, stdev=3968.45 00:09:56.937 clat percentiles (usec): 00:09:56.937 | 1.00th=[ 1450], 5.00th=[ 2008], 10.00th=[ 2311], 20.00th=[ 2409], 00:09:56.937 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:09:56.937 | 70.00th=[ 2737], 80.00th=[ 3425], 90.00th=[ 5866], 95.00th=[14615], 00:09:56.937 | 99.00th=[21365], 99.50th=[23462], 99.90th=[27132], 99.95th=[27919], 00:09:56.937 | 99.99th=[28443] 00:09:56.937 bw ( KiB/s): min=40232, max=93904, per=100.00%, avg=76772.00, stdev=25342.00, samples=4 00:09:56.937 iops : min=10058, max=23476, avg=19193.00, stdev=6335.50, samples=4 
00:09:56.937 lat (usec) : 750=0.01%, 1000=0.05% 00:09:56.937 lat (msec) : 2=5.88%, 4=81.02%, 10=9.11%, 20=3.14%, 50=0.78% 00:09:56.937 cpu : usr=99.35%, sys=0.00%, ctx=8, majf=0, minf=608 00:09:56.937 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:56.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.937 issued rwts: total=38535,38477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.937 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.937 00:09:56.937 Run status group 0 (all jobs): 00:09:56.937 READ: bw=74.8MiB/s (78.4MB/s), 74.8MiB/s-74.8MiB/s (78.4MB/s-78.4MB/s), io=151MiB (158MB), run=2013-2013msec 00:09:56.937 WRITE: bw=74.7MiB/s (78.3MB/s), 74.7MiB/s-74.7MiB/s (78.3MB/s-78.3MB/s), io=150MiB (158MB), run=2013-2013msec 00:09:56.937 ----------------------------------------------------- 00:09:56.937 Suppressions used: 00:09:56.937 count bytes template 00:09:56.937 1 32 /usr/src/fio/parse.c 00:09:56.937 1 8 libtcmalloc_minimal.so 00:09:56.937 ----------------------------------------------------- 00:09:56.937 00:09:56.937 04:00:43 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:56.937 04:00:43 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:56.937 04:00:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:56.937 04:00:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:56.937 04:00:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:56.937 04:00:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:56.937 04:00:44 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:56.937 04:00:44 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:56.937 04:00:44 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:56.937 04:00:44 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:56.937 04:00:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:56.937 04:00:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:56.937 04:00:44 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:56.937 04:00:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:56.937 04:00:44 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:56.937 04:00:44 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:56.937 04:00:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:56.937 04:00:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:56.937 04:00:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:56.937 04:00:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:56.937 04:00:44 nvme.nvme_fio -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:56.937 04:00:44 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:56.937 04:00:44 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:56.937 04:00:44 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:56.937 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:56.937 fio-3.35 00:09:56.937 Starting 1 thread 00:10:01.160 00:10:01.160 test: (groupid=0, jobs=1): err= 0: pid=64255: Fri Dec 6 04:00:47 2024 00:10:01.160 read: IOPS=16.1k, BW=63.1MiB/s (66.1MB/s)(127MiB/2010msec) 00:10:01.160 slat (nsec): min=3366, max=79408, avg=5216.31, stdev=2386.33 00:10:01.160 clat (usec): min=832, max=12099, avg=2723.05, stdev=1012.36 00:10:01.160 lat (usec): min=836, max=12103, avg=2728.27, stdev=1012.77 00:10:01.160 clat percentiles (usec): 00:10:01.160 | 1.00th=[ 1188], 5.00th=[ 1434], 10.00th=[ 1713], 20.00th=[ 2180], 00:10:01.160 | 30.00th=[ 2409], 40.00th=[ 2474], 50.00th=[ 2540], 60.00th=[ 2606], 00:10:01.160 | 70.00th=[ 2704], 80.00th=[ 3130], 90.00th=[ 3851], 95.00th=[ 4621], 00:10:01.160 | 99.00th=[ 6587], 99.50th=[ 7177], 99.90th=[11600], 99.95th=[11731], 00:10:01.160 | 99.99th=[11994] 00:10:01.160 bw ( KiB/s): min=38848, max=97824, per=100.00%, avg=64830.00, stdev=26937.58, samples=4 00:10:01.160 iops : min= 9712, max=24456, avg=16207.50, stdev=6734.39, samples=4 00:10:01.160 write: IOPS=16.2k, BW=63.2MiB/s (66.3MB/s)(127MiB/2010msec); 0 zone resets 00:10:01.160 slat (nsec): min=3510, max=58509, avg=5503.59, stdev=2280.46 00:10:01.160 clat (usec): min=832, max=25171, avg=5165.05, stdev=4735.65 00:10:01.160 lat (usec): min=837, max=25176, avg=5170.56, stdev=4735.93 00:10:01.160 clat percentiles (usec): 00:10:01.160 | 1.00th=[ 1352], 5.00th=[ 1795], 10.00th=[ 2212], 20.00th=[ 2442], 00:10:01.160 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2638], 60.00th=[ 2900], 00:10:01.160 | 70.00th=[ 4080], 80.00th=[ 9372], 90.00th=[13042], 95.00th=[15664], 00:10:01.160 | 99.00th=[20317], 99.50th=[22676], 99.90th=[23987], 99.95th=[24249], 00:10:01.160 | 99.99th=[24773] 00:10:01.160 bw ( KiB/s): min=39680, max=97520, per=100.00%, avg=64846.00, stdev=26449.99, samples=4 00:10:01.160 iops : min= 9920, max=24380, avg=16211.50, stdev=6612.50, samples=4 00:10:01.160 lat (usec) : 1000=0.06% 00:10:01.160 lat (msec) : 2=11.34%, 4=69.19%, 10=10.02%, 20=8.80%, 50=0.58% 00:10:01.160 cpu : usr=99.30%, sys=0.00%, ctx=6, majf=0, minf=608 00:10:01.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:01.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.160 issued rwts: total=32444,32521,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.160 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.160 00:10:01.160 Run status group 0 (all jobs): 00:10:01.160 READ: bw=63.1MiB/s (66.1MB/s), 63.1MiB/s-63.1MiB/s (66.1MB/s-66.1MB/s), io=127MiB (133MB), run=2010-2010msec 00:10:01.160 WRITE: bw=63.2MiB/s (66.3MB/s), 63.2MiB/s-63.2MiB/s (66.3MB/s-66.3MB/s), io=127MiB (133MB), run=2010-2010msec 00:10:01.160 ----------------------------------------------------- 00:10:01.160 Suppressions used: 00:10:01.160 count 
bytes template 00:10:01.160 1 32 /usr/src/fio/parse.c 00:10:01.160 1 8 libtcmalloc_minimal.so 00:10:01.160 ----------------------------------------------------- 00:10:01.160 00:10:01.160 04:00:48 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:01.160 04:00:48 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:01.160 04:00:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:01.160 04:00:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:01.160 04:00:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:01.160 04:00:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:01.160 04:00:48 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:01.160 04:00:48 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:01.160 04:00:48 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:01.160 04:00:48 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:01.160 04:00:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:01.160 04:00:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:01.161 04:00:48 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:01.161 04:00:48 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:01.161 04:00:48 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:01.161 04:00:48 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:01.161 04:00:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:01.161 04:00:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:01.161 04:00:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:01.161 04:00:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:01.161 04:00:48 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:01.161 04:00:48 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:01.161 04:00:48 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:01.161 04:00:48 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:01.419 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:01.419 fio-3.35 00:10:01.419 Starting 1 thread 00:10:07.976 00:10:07.976 test: (groupid=0, jobs=1): err= 0: pid=64321: Fri Dec 6 04:00:55 2024 00:10:07.976 read: IOPS=23.1k, BW=90.4MiB/s (94.8MB/s)(181MiB/2001msec) 00:10:07.976 slat (nsec): min=3326, max=55836, avg=5026.02, stdev=2342.81 00:10:07.976 clat (usec): min=239, max=8158, avg=2761.99, stdev=870.03 00:10:07.976 lat (usec): min=243, 
max=8163, avg=2767.01, stdev=871.49 00:10:07.976 clat percentiles (usec): 00:10:07.976 | 1.00th=[ 1614], 5.00th=[ 2114], 10.00th=[ 2278], 20.00th=[ 2376], 00:10:07.976 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:10:07.976 | 70.00th=[ 2638], 80.00th=[ 2769], 90.00th=[ 3589], 95.00th=[ 4883], 00:10:07.976 | 99.00th=[ 6390], 99.50th=[ 6718], 99.90th=[ 7635], 99.95th=[ 7767], 00:10:07.976 | 99.99th=[ 7898] 00:10:07.976 bw ( KiB/s): min=86928, max=96888, per=98.82%, avg=91469.33, stdev=5037.63, samples=3 00:10:07.976 iops : min=21732, max=24222, avg=22867.33, stdev=1259.41, samples=3 00:10:07.976 write: IOPS=23.0k, BW=89.8MiB/s (94.2MB/s)(180MiB/2001msec); 0 zone resets 00:10:07.976 slat (nsec): min=3473, max=65317, avg=5306.30, stdev=2449.74 00:10:07.976 clat (usec): min=224, max=8248, avg=2763.66, stdev=875.09 00:10:07.976 lat (usec): min=229, max=8253, avg=2768.96, stdev=876.61 00:10:07.976 clat percentiles (usec): 00:10:07.976 | 1.00th=[ 1598], 5.00th=[ 2114], 10.00th=[ 2278], 20.00th=[ 2376], 00:10:07.976 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:10:07.976 | 70.00th=[ 2638], 80.00th=[ 2769], 90.00th=[ 3589], 95.00th=[ 4883], 00:10:07.976 | 99.00th=[ 6390], 99.50th=[ 6849], 99.90th=[ 7767], 99.95th=[ 7832], 00:10:07.976 | 99.99th=[ 8029] 00:10:07.976 bw ( KiB/s): min=86648, max=97768, per=99.56%, avg=91592.00, stdev=5661.45, samples=3 00:10:07.976 iops : min=21662, max=24442, avg=22898.00, stdev=1415.36, samples=3 00:10:07.976 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:10:07.976 lat (msec) : 2=3.45%, 4=88.57%, 10=7.93% 00:10:07.976 cpu : usr=99.20%, sys=0.10%, ctx=3, majf=0, minf=608 00:10:07.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:07.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:07.976 issued rwts: total=46306,46022,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.976 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:07.976 00:10:07.976 Run status group 0 (all jobs): 00:10:07.976 READ: bw=90.4MiB/s (94.8MB/s), 90.4MiB/s-90.4MiB/s (94.8MB/s-94.8MB/s), io=181MiB (190MB), run=2001-2001msec 00:10:07.976 WRITE: bw=89.8MiB/s (94.2MB/s), 89.8MiB/s-89.8MiB/s (94.2MB/s-94.2MB/s), io=180MiB (189MB), run=2001-2001msec 00:10:08.232 ----------------------------------------------------- 00:10:08.232 Suppressions used: 00:10:08.232 count bytes template 00:10:08.232 1 32 /usr/src/fio/parse.c 00:10:08.232 1 8 libtcmalloc_minimal.so 00:10:08.232 ----------------------------------------------------- 00:10:08.232 00:10:08.232 04:00:55 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:08.232 04:00:55 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:08.232 04:00:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:08.232 04:00:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:08.569 04:00:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:08.569 04:00:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:08.826 04:00:56 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:08.826 04:00:56 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:08.826 04:00:56 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:08.826 04:00:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:08.826 04:00:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:08.826 04:00:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:08.826 04:00:56 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:08.826 04:00:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:08.826 04:00:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:08.826 04:00:56 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:08.826 04:00:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:08.826 04:00:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:08.826 04:00:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:08.826 04:00:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:08.826 04:00:56 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:08.826 04:00:56 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:08.826 04:00:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:08.826 04:00:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:08.826 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:08.826 fio-3.35 00:10:08.826 Starting 1 thread 00:10:18.816 00:10:18.816 test: (groupid=0, jobs=1): err= 0: pid=64383: Fri Dec 6 04:01:05 2024 00:10:18.816 read: IOPS=23.9k, BW=93.5MiB/s (98.0MB/s)(187MiB/2001msec) 00:10:18.816 slat (usec): min=3, max=133, avg= 4.91, stdev= 2.16 00:10:18.816 clat (usec): min=185, max=7372, avg=2665.10, stdev=724.26 00:10:18.816 lat (usec): min=188, max=7428, avg=2670.01, stdev=725.48 00:10:18.816 clat percentiles (usec): 00:10:18.816 | 1.00th=[ 1598], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2376], 00:10:18.816 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:10:18.816 | 70.00th=[ 2573], 80.00th=[ 2704], 90.00th=[ 3130], 95.00th=[ 4359], 00:10:18.816 | 99.00th=[ 5997], 99.50th=[ 6259], 99.90th=[ 6718], 99.95th=[ 6783], 00:10:18.816 | 99.99th=[ 7046] 00:10:18.816 bw ( KiB/s): min=93848, max=96136, per=99.16%, avg=94934.00, stdev=1148.40, samples=3 00:10:18.816 iops : min=23462, max=24034, avg=23733.33, stdev=287.13, samples=3 00:10:18.816 write: IOPS=23.8k, BW=92.9MiB/s (97.4MB/s)(186MiB/2001msec); 0 zone resets 00:10:18.817 slat (nsec): min=3417, max=64757, avg=5213.15, stdev=2102.23 00:10:18.817 clat (usec): min=191, max=7208, avg=2677.56, stdev=740.82 00:10:18.817 lat (usec): min=195, max=7228, avg=2682.78, stdev=742.07 00:10:18.817 clat percentiles (usec): 00:10:18.817 | 1.00th=[ 1647], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 
2376], 00:10:18.817 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:10:18.817 | 70.00th=[ 2573], 80.00th=[ 2704], 90.00th=[ 3195], 95.00th=[ 4490], 00:10:18.817 | 99.00th=[ 5997], 99.50th=[ 6259], 99.90th=[ 6652], 99.95th=[ 6783], 00:10:18.817 | 99.99th=[ 7046] 00:10:18.817 bw ( KiB/s): min=94680, max=95328, per=99.79%, avg=94936.67, stdev=344.35, samples=3 00:10:18.817 iops : min=23670, max=23832, avg=23734.00, stdev=86.19, samples=3 00:10:18.817 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.05% 00:10:18.817 lat (msec) : 2=2.99%, 4=90.78%, 10=6.13% 00:10:18.817 cpu : usr=99.30%, sys=0.00%, ctx=13, majf=0, minf=606 00:10:18.817 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:18.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.817 issued rwts: total=47891,47591,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.817 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.817 00:10:18.817 Run status group 0 (all jobs): 00:10:18.817 READ: bw=93.5MiB/s (98.0MB/s), 93.5MiB/s-93.5MiB/s (98.0MB/s-98.0MB/s), io=187MiB (196MB), run=2001-2001msec 00:10:18.817 WRITE: bw=92.9MiB/s (97.4MB/s), 92.9MiB/s-92.9MiB/s (97.4MB/s-97.4MB/s), io=186MiB (195MB), run=2001-2001msec 00:10:18.817 ----------------------------------------------------- 00:10:18.817 Suppressions used: 00:10:18.817 count bytes template 00:10:18.817 1 32 /usr/src/fio/parse.c 00:10:18.817 1 8 libtcmalloc_minimal.so 00:10:18.817 ----------------------------------------------------- 00:10:18.817 00:10:18.817 04:01:05 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:18.817 04:01:05 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:18.817 00:10:18.817 real 0m27.048s 00:10:18.817 user 0m15.654s 00:10:18.817 sys 0m19.965s 00:10:18.817 04:01:05 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.817 04:01:05 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:18.817 ************************************ 00:10:18.817 END TEST nvme_fio 00:10:18.817 ************************************ 00:10:18.817 ************************************ 00:10:18.817 END TEST nvme 00:10:18.817 ************************************ 00:10:18.817 00:10:18.817 real 1m36.069s 00:10:18.817 user 3m36.307s 00:10:18.817 sys 0m30.301s 00:10:18.817 04:01:05 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.817 04:01:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:18.817 04:01:05 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:18.817 04:01:05 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:18.817 04:01:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:18.817 04:01:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.817 04:01:05 -- common/autotest_common.sh@10 -- # set +x 00:10:18.817 ************************************ 00:10:18.817 START TEST nvme_scc 00:10:18.817 ************************************ 00:10:18.817 04:01:05 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:18.817 * Looking for test storage... 
00:10:18.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:18.817 04:01:05 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:18.817 04:01:05 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:18.817 04:01:05 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:18.817 04:01:05 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:18.817 04:01:05 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.817 04:01:05 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:18.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.817 --rc genhtml_branch_coverage=1 00:10:18.817 --rc genhtml_function_coverage=1 00:10:18.817 --rc genhtml_legend=1 00:10:18.817 --rc geninfo_all_blocks=1 00:10:18.817 --rc geninfo_unexecuted_blocks=1 00:10:18.817 00:10:18.817 ' 00:10:18.817 04:01:05 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:18.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.817 --rc genhtml_branch_coverage=1 00:10:18.817 --rc genhtml_function_coverage=1 00:10:18.817 --rc genhtml_legend=1 00:10:18.817 --rc geninfo_all_blocks=1 00:10:18.817 --rc geninfo_unexecuted_blocks=1 00:10:18.817 00:10:18.817 ' 00:10:18.817 04:01:05 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:18.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.817 --rc genhtml_branch_coverage=1 00:10:18.817 --rc genhtml_function_coverage=1 00:10:18.817 --rc genhtml_legend=1 00:10:18.817 --rc geninfo_all_blocks=1 00:10:18.817 --rc geninfo_unexecuted_blocks=1 00:10:18.817 00:10:18.817 ' 00:10:18.817 04:01:05 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:18.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.817 --rc genhtml_branch_coverage=1 00:10:18.817 --rc genhtml_function_coverage=1 00:10:18.817 --rc genhtml_legend=1 00:10:18.817 --rc geninfo_all_blocks=1 00:10:18.817 --rc geninfo_unexecuted_blocks=1 00:10:18.817 00:10:18.817 ' 00:10:18.817 04:01:05 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:18.817 04:01:05 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:18.817 04:01:05 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:18.817 04:01:05 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:18.817 04:01:05 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.817 04:01:05 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.817 04:01:05 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.817 04:01:05 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.817 04:01:05 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.817 04:01:05 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:18.817 04:01:05 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
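The lt/cmp_versions trace above is a plain field-by-field numeric version compare, used here to check whether the installed lcov (1.15 in this run, taken from lcov --version) predates 2.x, and to pick the old-style --rc option names accordingly. A condensed stand-alone version of the same logic; the real helper in scripts/common.sh supports more operators:

    lt() {
        local IFS=.- i
        local -a v1 v2
        read -ra v1 <<<"$1"; read -ra v2 <<<"$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            local a=${v1[i]:-0} b=${v2[i]:-0}
            ((a < b)) && return 0   # strictly smaller in this field: lower version
            ((a > b)) && return 1
        done
        return 1                    # equal versions are not "less than"
    }
    lt 1.15 2 && echo "installed lcov predates 2.x"   # true in this run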
00:10:18.817 04:01:05 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:18.817 04:01:05 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:18.817 04:01:05 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:18.817 04:01:05 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:18.817 04:01:05 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:18.817 04:01:05 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:18.817 04:01:05 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:18.818 04:01:05 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:18.818 04:01:05 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:18.818 04:01:05 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:18.818 04:01:05 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:18.818 04:01:05 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:18.818 04:01:05 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:18.818 04:01:05 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:18.818 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:18.818 Waiting for block devices as requested 00:10:18.818 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:18.818 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:18.818 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:18.818 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.088 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:24.088 04:01:11 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:24.088 04:01:11 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:24.088 04:01:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:24.088 04:01:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:24.088 04:01:11 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
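For orientation in the long scan that starts here: scan_nvme_ctrls walks /sys/class/nvme/nvme*, maps each controller back to a PCI address (functions.sh resolved nvme0 to 0000:00:11.0 above), and then snapshots every id-ctrl field. The controller-to-bdf step can be reproduced with sysfs alone; this is a minimal equivalent, not the exact functions.sh code:

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:11.0
        echo "${ctrl##*/} -> $pci"
    done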
00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:24.088 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.089 04:01:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:24.089 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
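(Of the id-ctrl fields captured so far, mdts=7 is the one test harnesses most often act on: per the NVMe spec, the maximum data transfer size is 2^MDTS units of the controller's minimum memory page size, CAP.MPSMIN. Assuming the usual 4 KiB MPSMIN for this QEMU controller, the decode works out as:)

    # mdts=7 as captured above; MPSMIN assumed to be 4 KiB (CAP.MPSMIN = 0)
    mdts=7
    mpsmin_bytes=4096
    echo "max transfer: $(( (1 << mdts) * mpsmin_bytes )) bytes"   # 524288 (512 KiB)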
00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:24.090 04:01:11 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.090 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.091 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.092 04:01:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:24.092 04:01:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.092 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:24.093 
04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
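(The id-ns fields captured for ng0n1 are enough to derive the namespace capacity: nsze/ncap/nuse are 0x140000 blocks, and flbas=0x4 selects LBA format 4, whose descriptor is recorded a little further down as lbaf4 with lbads:12, i.e. 2^12 = 4096-byte logical blocks. A worked decode, with the values taken from the trace:)

    # nsze=0x140000 and flbas=0x4 as captured above; lbaf4 has lbads:12
    nsze=0x140000
    lbads=12
    echo "namespace size: $(( nsze * (1 << lbads) )) bytes"   # 5368709120 (5 GiB)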
00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.093 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:10:24.094 04:01:11 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.094 04:01:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.094 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:24.095 04:01:11 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.095 04:01:11 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.095 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:24.096 04:01:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:24.096 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:24.097 04:01:11 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:24.097 04:01:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:24.097 04:01:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:24.097 04:01:11 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:24.097 04:01:11 
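
The block above closes out the nvme0 controller: @58-@63 record its namespace, its BDF (0000:00:11.0) and its ordering, and the @47 loop moves on to /sys/class/nvme/nvme1. Every register line in these dumps comes from the nvme_get helper in nvme/functions.sh, which splits nvme-cli output on ':' and evals each non-empty value into a global associative array (@16-@23 in the trace). A minimal sketch of that pattern, with the structure taken from the trace and the field-name trimming assumed rather than copied from the SPDK source:

nvme_get() {
  local ref=$1 reg val
  shift
  local -gA "$ref=()"               # the @20 declaration, e.g. nvme1=()
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}        # assumed cleanup of nvme-cli's padded field names
    [[ -n $val ]] || continue       # the @22 guard: skip lines that carry no value
    eval "${ref}[\$reg]=\${val# }"  # the @23 eval: nvme1[vid]=0x1b36, nvme1[sn]='12340 ', ...
  done < <("$@")                    # e.g. nvme_get nvme1 nvme id-ctrl /dev/nvme1
}

The id-ctrl dump for nvme1 that follows, and the id-ns dumps for ng1n1 and nvme1n1 after it, are all this one loop running to completion.
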
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.097 
04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.097 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:24.098 
04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:24.098 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:24.099 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
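
Several of the nvme1 fields parsed above are log2-encoded per the NVMe base spec: ver=0x10400 packs major.minor.tertiary version numbers, mdts=7 caps a single transfer at 2^7 minimum-size pages, and sqes=0x66/cqes=0x44 pack the maximum and required queue-entry sizes into the high and low nibbles. A quick decode (the 4 KiB CAP.MPSMIN is an assumption, QEMU's default, since the trace never shows the CAP register):

ver=0x10400 mdts=7 sqes=0x66 cqes=0x44 mpsmin_bytes=4096
printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))  # NVMe 1.4.0
echo "max transfer: $(( (1 << mdts) * mpsmin_bytes )) bytes"                          # 512 KiB
echo "SQE: $(( 1 << (sqes & 0xf) ))B required, $(( 1 << (sqes >> 4) ))B max"          # 64B / 64B
echo "CQE: $(( 1 << (cqes & 0xf) ))B required, $(( 1 << (cqes >> 4) ))B max"          # 16B / 16B
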
00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.100 04:01:11 nvme_scc -- 
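
The oncs=0x15d parsed above is the field this nvme_scc run ultimately cares about: bit 8 of ONCS advertises the Simple Copy command (bit numbering per the NVMe base spec), while the per-namespace mssrl=128/mcl=128/msrc=127 values in these dumps bound a copy's source ranges. A one-line check:

oncs=0x15d
(( oncs & (1 << 8) )) && echo 'Simple Copy supported'   # 0x15d has bit 8 set
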
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:24.100 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:24.101 04:01:11 
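
The odd-looking rwt and active_power_workload keys just stored are an artifact of the line-oriented parse: nvme-cli prints the ps0 power-state descriptor across several physical lines, so each continuation line's first colon-delimited token becomes its own array key. The effect is easy to reproduce with the read loop sketched earlier (nvme-cli's exact spacing is assumed here):

while IFS=: read -r reg val; do
  printf '[%s] -> [%s]\n' "${reg//[[:space:]]/}" "$val"
done <<'EOF'
ps    0 : mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0
          rwt:0 rwl:0 idle_power:- active_power:-
EOF

which yields the keys ps0 and rwt, exactly as stored above.
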
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
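
For ng1n1 the geometry is now pinned down: flbas=0x7 selects LBA format 7 (its lbaf7 entry below reads "ms:64 lbads:12 rp:0 (in use)", i.e. 4 KiB data blocks with 64 bytes of metadata), and nsze/ncap/nuse all report 0x17a17a blocks. In bytes:

nsze=0x17a17a lbads=12
echo "$(( nsze * (1 << lbads) )) bytes"   # 1548666 * 4096 = 6343335936, ~6.3 GB
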
00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:10:24.101 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:10:24.102 04:01:11 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:24.102 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 
04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.103 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:24.104 
04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.104 04:01:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:24.104 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:24.105 04:01:11 
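The trace above is the harness's nvme_get routine consuming `nvme id-ns` output for ng1n1 and then nvme1n1: each "field : value" line is split on the first colon by `IFS=: read -r reg val` and stored into a global associative array named after the device node via `eval`. Note the `[[ -n '' ]]` test at the top of each parse, which skips the banner line that carries no value, and that the remainder of each line keeps its embedded colons, which is how the lbaf descriptors like 'ms:64 lbads:12 rp:0 (in use)' survive intact. For nvme1n1, flbas=0x7 selects LBA format 7 (ms:64 lbads:12, i.e. 4096-byte data blocks with 64 bytes of metadata), so nsze=0x17a17a blocks works out to roughly 6.3 GB. A minimal sketch of the same pattern; the name nvme_get_sketch and its interface are illustrative, not the actual functions.sh API:

    # Sketch only: mirrors the IFS=:/read/eval pattern visible in the trace above.
    # Usage: nvme_get_sketch nvme1n1 id-ns /dev/nvme1n1
    nvme_get_sketch() {
        local ref=$1 reg val; shift
        declare -gA "$ref=()"                    # global associative array, as 'local -gA' in the trace
        while IFS=: read -r reg val; do
            [[ -n $reg && -n $val ]] || continue # skip the banner line (cf. [[ -n '' ]] above)
            reg=${reg//[[:space:]]/}             # field names come padded, e.g. 'nsze '
            val=${val# }                         # rest of line, embedded colons kept ('ms:0 lbads:9 rp:0 ')
            eval "${ref}[\$reg]=\$val"           # e.g. nvme1n1[nsze]=0x17a17a
        done < <(nvme "$@")
    }

After the call, fields are addressable as e.g. ${nvme1n1[flbas]} or ${nvme1n1[lbaf7]}, which is how later helpers in the test query namespace and controller capabilities without re-running nvme-cli.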
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:24.105 04:01:11 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:24.105 04:01:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:24.105 04:01:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:24.105 04:01:11 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:24.105 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:24.106 04:01:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:24.106 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:24.107 04:01:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:24.107 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:24.108 
04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.108 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:24.109 
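With id-ctrl for nvme2 parsed, the trace reaches the same registration step it performed for nvme1 at the top of this block (functions.sh@58-63): the per-namespace map is linked through _ctrl_ns, the controller is recorded in ctrls, its namespace map name in nvmes, its PCI address in bdfs, and an index entry in ordered_ctrls, after which the inner loop visits each ng2nY/nvme2nY node. A rough sketch of that enumeration shape, reusing nvme_get_sketch from the previous annotation; resolving the BDF via readlink is an assumption here (the trace already has the address in hand for the pci_can_use check), and the real loop uses an extglob to match both ngXnY and nvmeXnY nodes:

    shopt -s nullglob
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        ctrl_dev=${ctrl##*/}                                # e.g. nvme2
        bdf=$(basename "$(readlink -f "$ctrl/device")")     # e.g. 0000:00:12.0 (assumed resolution)
        nvme_get_sketch "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns                     # name of the controller's namespace map
        bdfs[$ctrl_dev]=$bdf
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev          # index by controller number
        for ns in "$ctrl/$ctrl_dev"n*; do                   # simplified: real code also matches ngXnY
            nvme_get_sketch "${ns##*/}" id-ns "/dev/${ns##*/}"
        done
    done

Here 0000:00:12.0 is the QEMU controller whose id-ctrl was just parsed (sn '12342 ', mn 'QEMU NVMe Ctrl ', subnqn nqn.2019-08.org.qemu:12342); once the loop finishes, every controller and namespace in the VM is queryable from these arrays.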
04:01:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:24.109 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.110 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.111 04:01:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:24.111 04:01:11 nvme_scc -- 
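At this point ng2n1 has been stored in _ctrl_ns and the @54 loop header repeats for the next sysfs entry. The pattern it iterates, "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, is an extglob alternation matching both the generic character nodes (ng2n1, ng2n2, ...) and the block nodes (nvme2n1, ...) of the same controller, while ${ns##*n} reduces each hit to its namespace index. A hedged sketch of the same glob, run against a scratch directory instead of /sys/class/nvme (all paths and names here are illustrative):

#!/usr/bin/env bash
# Sketch of the namespace-discovery glob from functions.sh@54, pointed
# at a throwaway directory instead of sysfs.
shopt -s extglob nullglob

scratch=$(mktemp -d)
ctrl=$scratch/nvme2
mkdir -p "$ctrl"
touch "$ctrl"/ng2n{1,2,3} "$ctrl"/nvme2n1 "$ctrl"/model   # model is a decoy

declare -A ctrl_ns=()
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
  # ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2", so the pattern
  # becomes @(ng2|nvme2n)*; ${ns##*n} keeps only the namespace index.
  ctrl_ns[${ns##*n}]=${ns##*/}
done

declare -p ctrl_ns   # index 1 is matched twice (ng2n1, then nvme2n1),
rm -rf "$scratch"    # and the later block node overwrites the char node

The same overwrite happens in the log: _ctrl_ns[1] is set to ng2n1 here and will be reassigned when the loop reaches nvme2n1 further down.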
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:24.111 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.112 
04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:10:24.112 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:10:24.376 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.376 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.376 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.376 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:10:24.376 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:24.377 04:01:11 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.377 04:01:11 
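ng2n2 closes out the same way as ng2n1: eight lbafN descriptors of the form "ms:M lbads:D rp:R", exactly one tagged "(in use)", with flbas=0x4 selecting that slot. The log never spells out what the fields mean, so for reference: lbads is log2 of the data block size, ms is the per-block metadata size in bytes, and rp is a relative-performance hint, which makes the in-use lbaf4 ("ms:0 lbads:12") plain 4096-byte blocks with no metadata. A small sketch that decodes one such descriptor string (decode_lbaf is an illustrative helper, not part of nvme/functions.sh):

#!/usr/bin/env bash
# Sketch: decode an lbafN descriptor string exactly as captured in the
# arrays above. decode_lbaf is illustrative only.
decode_lbaf() {
  local desc=$1 re='ms:([0-9]+) +lbads:([0-9]+) +rp:([0-9]+)'
  [[ $desc =~ $re ]] || return 1
  printf 'block=%d B, metadata=%d B, relative performance=%d\n' \
    $(( 1 << BASH_REMATCH[2] )) "${BASH_REMATCH[1]}" "${BASH_REMATCH[3]}"
}

decode_lbaf 'ms:0 lbads:12 rp:0 (in use)'   # block=4096 B, metadata=0 B
decode_lbaf 'ms:64 lbads:9 rp:0 '           # block=512 B, metadata=64 B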
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.377 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.378 04:01:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:24.378 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- 
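With _ctrl_ns[3]=ng2n3 recorded, the loop reaches the last sysfs entry for this controller, the block node nvme2n1, whose identify data below repeats the values already seen for ng2n1. As a sanity check on the numbers being stored: nsze=0x100000 blocks under the in-use 4096-byte format is a 4 GiB namespace. A hedged sketch of that arithmetic, with the array declared inline from the ng2n3 values logged above purely so the snippet runs standalone:

#!/usr/bin/env bash
# Sketch: size math over one namespace table. The literal values are
# the ng2n3 entries from the trace; the inline declaration is only so
# the snippet is self-contained.
declare -A ng2n3=(
  [nsze]=0x100000
  [flbas]=0x4                               # low bits select slot 4
  [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
)

lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<<"${ng2n3[lbaf4]}")
blocks=$(( ng2n3[nsze] ))                   # bash evaluates the 0x prefix
printf '%d blocks x %d B = %d GiB\n' \
  "$blocks" $(( 1 << lbads )) $(( blocks << lbads >> 30 ))

Run as-is this prints "1048576 blocks x 4096 B = 4 GiB", matching the nsze/lbaf4 pair the trace stored for every namespace of this controller.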
nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.379 04:01:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:24.379 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:24.380 04:01:11 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.380 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:24.381 04:01:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:24.381 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:24.382 
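The nvme_get call opening just above for nvme2n3 is the same routine that filled in nvme2n1 and nvme2n2: it runs /usr/local/src/nvme-cli/nvme id-ns against the device and eval's every "field: value" line of the output into a global associative array (nvme/functions.sh@16-23 in the trace). A minimal sketch of that loop, reconstructed from the trace; the exact whitespace trimming is an assumption:

    nvme_get() {
        local ref=$1 reg val
        shift                                        # remaining args, e.g.: id-ns /dev/nvme2n3
        local -gA "$ref=()"                          # declare the global array, e.g. nvme2n3=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue                # skip banner/blank lines (trace @22)
            reg=${reg//[[:space:]]/}                 # assumed: collapse "lbaf  0 " -> "lbaf0"
            val=${val#"${val%%[![:space:]]*}"}       # assumed: strip leading spaces only
            eval "${ref}[${reg}]=\"${val}\""         # e.g. nvme2n3[nsze]="0x100000" (trace @23)
        done < <(/usr/local/src/nvme-cli/nvme "$@")  # binary path as shown at trace @16
    }

Trailing whitespace is evidently kept (the trace stores nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' with a trailing space), and the caller's extglob loop at @54, for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, picks up both the ngXnY and nvmeXnY entries under each controller's sysfs directory.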
04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:24.382 04:01:11 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.382 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:24.383 04:01:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:24.383 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:24.384 04:01:11 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:24.384 04:01:11 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:24.384 04:01:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:24.384 04:01:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:24.384 04:01:11 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 
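Here the outer loop (nvme/functions.sh@47-52) has moved on to the next controller: nvme3 resolves to PCI BDF 0000:00:13.0 and is gated through pci_can_use before id-ctrl runs. The scripts/common.sh@18-27 trace (an empty left operand to =~, then [[ -z '' ]], then return 0) suggests the function looks the BDF up in block/allow lists that are unset in this run. A hedged reconstruction; the PCI_BLOCKED and PCI_ALLOWED names are assumptions:

    pci_can_use() {
        local i   # declared as in the trace @18; presumably used on other paths
        # Trace @21: is the BDF found in the (here empty) blocklist string?
        if [[ ${PCI_BLOCKED:-} =~ $1 ]]; then
            return 1
        fi
        # Trace @25/@27: with no allowlist configured, any BDF is claimable.
        if [[ -z ${PCI_ALLOWED:-} ]]; then
            return 0
        fi
        [[ ${PCI_ALLOWED} =~ $1 ]]
    }

With both lists empty, every controller in this run passes straight through and gets probed.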
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:24.384 04:01:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:24.384 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:24.385 04:01:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 
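The thresholds captured just below, wctemp=343 and cctemp=373, are the warning and critical composite temperatures, which NVMe reports in kelvins; a one-liner to sanity-check the conversion:

    # WCTEMP/CCTEMP are kelvins; 343 K and 373 K from the id-ctrl data below.
    printf 'warning: %d C, critical: %d C\n' $((343 - 273)) $((373 - 273))
    # -> warning: 70 C, critical: 100 C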
04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:24.385 04:01:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 
04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.385 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:24.386 
04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.386 04:01:11 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:24.386 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:24.387 04:01:11 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:24.387 04:01:11 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
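[Editor's note] The trace here is midway through get_ctrls_with_feature scc: nvme1 and nvme0 have already passed, and the nvme3 check continues below. Each ctrl_has_scc call reads ONCS out of the id-ctrl data that scan_nvme_ctrls cached into the ctrls/nvmes/bdfs associative arrays, then tests bit 8 (0x100), the Copy command bit; 0x15d has that bit set, so all four QEMU controllers qualify. A minimal standalone sketch of the same check, written against live nvme-cli output instead of the functions.sh cache (a reconstruction for illustration, not the functions.sh source; device names are examples):

    #!/usr/bin/env bash
    # Sketch of the ONCS test that ctrl_has_scc performs: read ONCS from
    # Identify Controller, then test bit 8, the Copy command bit.
    ctrl_has_scc() {
        local dev=$1 oncs
        # Every controller in this log reports "oncs : 0x15d".
        oncs=$(nvme id-ctrl "/dev/$dev" | awk -F: '/^oncs/ {gsub(/ /, ""); print $2}')
        # 0x15d & (1 << 8) = 0x100, i.e. Copy / Simple Copy is supported.
        (( oncs & 1 << 8 ))
    }

    # Numeric order here; the test iterates its associative array in hash
    # order, which is why the log settles on nvme1 rather than nvme0.
    for dev in nvme0 nvme1 nvme2 nvme3; do
        ctrl_has_scc "$dev" && { echo "$dev"; break; }
    done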
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:10:24.387 04:01:11 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:10:24.387 04:01:11 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:10:24.387 04:01:11 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:10:24.387 04:01:11 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:24.647 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:25.214 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:10:25.214 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:10:25.214 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:10:25.214 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:10:25.214 04:01:12 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:10:25.214 04:01:12 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:25.214 04:01:12 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:25.214 04:01:12 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:10:25.473 ************************************
00:10:25.473 START TEST nvme_simple_copy
00:10:25.473 ************************************
00:10:25.473 04:01:12 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:10:25.473 Initializing NVMe Controllers
00:10:25.473 Attaching to 0000:00:10.0
00:10:25.473 Controller supports SCC. Attached to 0000:00:10.0
00:10:25.473 Namespace ID: 1 size: 6GB
00:10:25.473 Initialization complete.
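[Editor's note] For context on the result block that follows: simple_copy exercises the NVMe Copy command (opcode 0x19) end to end. It fills LBAs 0 through 63 with random data, issues a single device-side Copy with one source range to destination LBA 256, reads both ranges back, and counts matching LBAs; "LBAs matching Written Data: 64" below is the pass criterion. A rough nvme-cli equivalent of the same sequence, assuming the namespace were bound to the kernel driver rather than uio_pci_generic (the test actually drives 0000:00:10.0 via SPDK; flag spellings are from recent nvme-cli releases, confirm with nvme copy --help):

    #!/usr/bin/env bash
    dev=/dev/nvme1n1   # example device, not taken from this run

    # Destructive: overwrites LBAs 0-63. Fill the source range with random
    # data (4096-byte blocks, per "Namespace Block Size:4096" below).
    dd if=/dev/urandom of="$dev" bs=4096 count=64 oflag=direct

    # One Copy command, single source range: LBAs 0..63 -> destination 256.
    # NLB is 0-based in the copy descriptor, hence --blocks=63 for 64 blocks.
    nvme copy "$dev" --sdlba=256 --slbs=0 --blocks=63

    # Verify, as the test does: destination must match source, 64 LBAs.
    cmp <(dd if="$dev" bs=4096 count=64 iflag=direct status=none) \
        <(dd if="$dev" bs=4096 skip=256 count=64 iflag=direct status=none) \
      && echo "LBAs matching Written Data: 64"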
00:10:25.473
00:10:25.473 Controller QEMU NVMe Ctrl (12340 )
00:10:25.473 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:10:25.473 Namespace Block Size:4096
00:10:25.473 Writing LBAs 0 to 63 with Random Data
00:10:25.473 Copied LBAs from 0 - 63 to the Destination LBA 256
00:10:25.473 LBAs matching Written Data: 64
00:10:25.733
00:10:25.733 real 0m0.256s
00:10:25.733 user 0m0.101s
00:10:25.733 sys 0m0.053s
00:10:25.733 04:01:13 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:25.733 ************************************
00:10:25.733 END TEST nvme_simple_copy
00:10:25.733 ************************************
00:10:25.733 04:01:13 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:10:25.733 ************************************
00:10:25.733 END TEST nvme_scc
00:10:25.733 ************************************
00:10:25.733
00:10:25.733 real 0m7.495s
00:10:25.733 user 0m1.064s
00:10:25.733 sys 0m1.308s
00:10:25.733 04:01:13 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:25.733 04:01:13 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:10:25.733 04:01:13 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:10:25.733 04:01:13 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:10:25.733 04:01:13 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:10:25.733 04:01:13 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:10:25.733 04:01:13 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:10:25.733 04:01:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:25.733 04:01:13 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:25.733 04:01:13 -- common/autotest_common.sh@10 -- # set +x
00:10:25.733 ************************************
00:10:25.733 START TEST nvme_fdp
00:10:25.733 ************************************
00:10:25.733 04:01:13 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:10:25.733 * Looking for test storage...
00:10:25.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:10:25.733 04:01:13 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:25.733 04:01:13 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version
00:10:25.733 04:01:13 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:25.733 04:01:13 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:25.733 04:01:13 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:25.734 04:01:13 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:25.734 04:01:13 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.734 04:01:13 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:25.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.734 --rc genhtml_branch_coverage=1 00:10:25.734 --rc genhtml_function_coverage=1 00:10:25.734 --rc genhtml_legend=1 00:10:25.734 --rc geninfo_all_blocks=1 00:10:25.734 --rc geninfo_unexecuted_blocks=1 00:10:25.734 00:10:25.734 ' 00:10:25.734 04:01:13 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:25.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.734 --rc genhtml_branch_coverage=1 00:10:25.734 --rc genhtml_function_coverage=1 00:10:25.734 --rc genhtml_legend=1 00:10:25.734 --rc geninfo_all_blocks=1 00:10:25.734 --rc geninfo_unexecuted_blocks=1 00:10:25.734 00:10:25.734 ' 00:10:25.734 04:01:13 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:25.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.734 --rc genhtml_branch_coverage=1 00:10:25.734 --rc genhtml_function_coverage=1 00:10:25.734 --rc genhtml_legend=1 00:10:25.734 --rc geninfo_all_blocks=1 00:10:25.734 --rc geninfo_unexecuted_blocks=1 00:10:25.734 00:10:25.734 ' 00:10:25.734 04:01:13 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:25.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.734 --rc genhtml_branch_coverage=1 00:10:25.734 --rc genhtml_function_coverage=1 00:10:25.734 --rc genhtml_legend=1 00:10:25.734 --rc geninfo_all_blocks=1 00:10:25.734 --rc geninfo_unexecuted_blocks=1 00:10:25.734 00:10:25.734 ' 00:10:25.734 04:01:13 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:25.734 04:01:13 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:25.734 04:01:13 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:25.734 04:01:13 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:25.734 04:01:13 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:25.734 04:01:13 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:25.734 04:01:13 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.734 04:01:13 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.734 04:01:13 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.734 04:01:13 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.734 04:01:13 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.734 04:01:13 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.734 04:01:13 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:25.734 04:01:13 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.734 04:01:13 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:25.734 04:01:13 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:25.734 04:01:13 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:25.734 04:01:13 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:25.734 04:01:13 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:25.734 04:01:13 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:25.734 04:01:13 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:25.734 04:01:13 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:25.734 04:01:13 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:25.734 04:01:13 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:25.734 04:01:13 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:25.994 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:26.253 Waiting for block devices as requested 00:10:26.253 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:26.253 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:26.512 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:26.512 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:31.790 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:31.790 04:01:18 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:31.790 04:01:18 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:31.790 04:01:18 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:31.790 04:01:18 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:31.790 04:01:18 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:31.790 04:01:18 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.790 04:01:18 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.790 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:31.791 04:01:18 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:31.791 04:01:18 nvme_fdp -- 
[repeated per-register xtrace steps (IFS=:, read -r reg val, [[ -n ... ]], eval) collapsed to the parsed values below]
00:10:31.791 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0 id-ctrl, remaining registers:
    wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0
    fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0
    anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
    sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0
    icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
    subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
    ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
    rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:10:31.793 04:01:18 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:10:31.793 04:01:18 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:31.793 04:01:18 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:10:31.793 04:01:18 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:10:31.793 04:01:18 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
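The values above are filled in by the nvme_get helper whose xtrace dominates this log: it runs nvme-cli, splits each "name : value" line of output on the colon, and stores the pair in a global associative array named after the device node. A minimal sketch of that loop, pieced together from the functions.sh line numbers visible in the trace (@16-@23); the whitespace handling and the NVME_CLI variable are assumptions here, not SPDK's exact code:

    NVME_CLI=/usr/local/src/nvme-cli/nvme    # binary path seen at @16 (assumption: kept in a variable)

    # nvme_get <array-name> <subcommand> <device>, e.g. nvme_get nvme0 id-ctrl /dev/nvme0
    nvme_get() {
        local ref=$1 reg val                     # @17
        shift                                    # @18
        local -gA "$ref=()"                      # @20: global associative array, e.g. nvme0=()
        while IFS=: read -r reg val; do          # @21: split "wctemp : 343" on the first colon
            [[ -n $val ]] || continue            # @22: skip banner/blank lines with no value field
            reg=${reg//[[:space:]]/}             # assumption: squeeze spaces, "ps    0" -> "ps0"
            val=${val#"${val%%[![:space:]]*}"}   # assumption: trim leading spaces from the value
            eval "${ref}[${reg}]=\"\$val\""      # @23: e.g. nvme0[wctemp]="343"
        done < <("$NVME_CLI" "$@")               # @16: e.g. nvme id-ns /dev/ng0n1
    }

Splitting only on the first colon is what lets a value like nqn.2019-08.org.qemu:12341 survive intact, and later checks can read e.g. "${nvme0[oncs]}" (0x15d above) without re-invoking nvme-cli.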
00:10:31.793 04:01:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:10:31.793 04:01:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1 id-ns registers:
    nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
    nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
    nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
    anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
    nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
    lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
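The @54 loop line above shows how the scan finds every namespace node a controller exposes with a single extglob pattern. A standalone sketch of just that glob, with a hypothetical echo for illustration (extglob must be enabled, as the @(...) syntax in the trace implies):

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme0
    # ${ctrl##*nvme} -> "0" and ${ctrl##*/} -> "nvme0", so the pattern expands to
    # /sys/class/nvme/nvme0/@(ng0|nvme0n)* and matches both ng0n1 and nvme0n1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue    # @55: guard against an unmatched literal pattern
        ns_dev=${ns##*/}            # @56: "ng0n1", then "nvme0n1"
        echo "namespace node: $ns_dev"
    done

That is why the same namespace is read twice below: once as the generic character node ng0n1 and once as the block device nvme0n1; the map slot _ctrl_ns[1] is written for each and ends up pointing at nvme0n1.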
00:10:31.794 04:01:18 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng0n1
00:10:31.794 04:01:18 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:10:31.794 04:01:18 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:10:31.794 04:01:18 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:10:31.795 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1 id-ns registers (same values as ng0n1):
    nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
    nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
    nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
    anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
    nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
    lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme0n1
00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:10:31.796 04:01:19 nvme_fdp -- scripts/common.sh@18-27 -- # pci_can_use 0000:00:10.0: no PCI allow/block filters set, return 0
00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
'nvme1[sn]="12340 "' 00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:31.796 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:31.797 04:01:19 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
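The pattern repeated throughout this trace is nvme/functions.sh's nvme_get helper: it runs nvme-cli against the device, splits each "reg : val" output line on ':' with a narrowed IFS, and evals the pair into a bash associative array declared via local -gA, which is why every register above appears as an IFS=: / read -r / [[ -n ]] / eval quartet. A minimal standalone sketch of that loop (the whitespace trimming here is a simplifying assumption, not the exact upstream code):

    # Sketch: what nvme_get effectively does for /dev/nvme1.
    declare -A nvme1=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # "sn       " -> "sn" (assumed cleanup)
        [[ -n $reg && -n $val ]] || continue
        eval "nvme1[$reg]=\"${val# }\""     # e.g. nvme1[sn]='12340 '
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1)
    echo "mdts=${nvme1[mdts]} ver=${nvme1[ver]}"   # -> mdts=7 ver=0x10400

For reference, ver=0x10400 decodes as NVMe 1.4.0: the major version sits in bits 31:16 of the field and the minor version in bits 15:8.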
00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.797 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
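Two of the raw values just parsed are easy to misread. wctemp and cctemp are composite-temperature thresholds reported in kelvin per the NVMe spec, and mdts from earlier is a power-of-two exponent in units of the controller's minimum memory page size (taken as 4 KiB below, an assumption based on the usual MPSMIN for this QEMU controller):

    echo "$(( ${nvme1[wctemp]} - 273 )) C warning"       # 343 K -> 70 C
    echo "$(( ${nvme1[cctemp]} - 273 )) C critical"      # 373 K -> 100 C
    echo "$(( (1 << ${nvme1[mdts]}) * 4 )) KiB max I/O"  # 2^7 * 4 KiB = 512 KiB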
00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:10:31.798 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:31.799 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:31.800 04:01:19 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
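Worth pausing on the loop header at functions.sh@54 above: "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* is an extglob alternation that picks up both device nodes a namespace exposes, the generic character node (ng1n1) and the block node (nvme1n1), in a single glob. A sketch of the same idea (extglob must be enabled for @() to match, which the script presumably does during setup):

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme1
    # "${ctrl##*nvme}" -> "1" and "${ctrl##*/}" -> "nvme1", so the glob
    # matches ng1n1 as well as nvme1n1 under the controller's sysfs dir:
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "${ns##*/}"    # -> ng1n1, nvme1n1
    done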
00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:10:31.800 04:01:19 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
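The mssrl/mcl/msrc triple just above describes the namespace's Copy command limits: maximum blocks in a single source range, maximum total blocks per Copy, and the source-range count, msrc being a 0's-based value. Decoded for ng1n1:

    echo "$(( ${ng1n1[msrc]} + 1 )) source ranges max"              # 127 -> 128
    echo "${ng1n1[mssrl]} blocks/range, ${ng1n1[mcl]} blocks/copy"  # 128, 128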
00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.800 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:31.801 04:01:19 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:31.801 04:01:19 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:31.801 04:01:19 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.801 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:31.802 04:01:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:31.802 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:31.803 04:01:19 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:31.803 04:01:19 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:31.803 04:01:19 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:31.803 04:01:19 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.803 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
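Among the id-ctrl fields just captured for nvme2, mdts=7 is the Maximum Data Transfer Size exponent: the NVMe spec defines the transfer limit as 2^MDTS times the controller's minimum memory page size (CAP.MPSMIN, typically 4 KiB on these QEMU controllers). A quick way to turn it into bytes, assuming a 4 KiB page (hypothetical helper, not part of functions.sh):

    max_xfer_bytes() { echo $(( (1 << $1) * 4096 )); }
    max_xfer_bytes "${nvme2[mdts]}"   # 2^7 * 4096 = 524288 bytes (512 KiB)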
00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:31.804 04:01:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
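The wctemp=343 and cctemp=373 entries just above are the warning and critical composite temperature thresholds, which the spec reports in kelvins; converting makes the QEMU defaults plain (hypothetical helper, integer offset):

    k_to_c() { echo $(( $1 - 273 )); }
    k_to_c "${nvme2[wctemp]}"   # 70 (deg C); cctemp 373 K -> 100 deg C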
00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.804 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:31.805 04:01:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.805 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.806 04:01:19 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
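The oncs=0x15d captured a few entries earlier is the Optional NVM Command Support bit field; per the NVMe spec, bit 0 is Compare, bit 2 Dataset Management, and bit 3 Write Zeroes. A one-line probe once the array is populated (helper name is made up, not from functions.sh):

    has_oncs_bit() { (( ${nvme2[oncs]} & (1 << $1) )); }
    has_oncs_bit 3 && echo "nvme2 supports Write Zeroes"   # true for 0x15d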
00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.806 04:01:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.807 
04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:31.807 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.808 04:01:19 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:31.808 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:31.809 04:01:19 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 
04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.809 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:10:31.810 
04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.810 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
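What this stretch of the trace is doing: test/nvme/functions.sh walks the nodes under /sys/class/nvme/nvme2 (the extglob at functions.sh@54 matches both the generic character nodes, ng2nX, and the block nodes, nvme2nX, which is why each namespace is dumped twice), runs nvme id-ns on each, and caches every "register : value" line in a global associative array named after the node (ng2n2, ng2n3, nvme2n1, ...). A minimal self-contained sketch of that parsing pattern, assuming nvme-cli's plain-text id-ns output; the function name nvme_get_sketch, the array name demo_ns, and the exact whitespace trimming are illustrative, not copied from the suite:

    #!/usr/bin/env bash
    # Parse "reg : val" lines from nvme-cli into a global associative array,
    # mirroring the IFS=: / read / eval steps visible in the trace above.
    nvme_get_sketch() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                # declare the array globally (cf. functions.sh@20)
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}       # strip padding around the register name
            [[ -n ${val# } ]] || continue  # skip valueless lines (cf. the [[ -n ... ]] at @22)
            eval "${ref}[\$reg]=\${val# }" # assign, as the eval at functions.sh@23 does
        done < <(nvme id-ns "$dev")
    }
    # Hypothetical usage: nvme_get_sketch demo_ns /dev/ng2n2; echo "${demo_ns[nsze]}"

Lines whose value contains further colons (the lbaf descriptors) survive this intact because read hands everything after the first colon to val, which matches the lbaf0..lbaf7 entries recorded in the trace.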
00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:10:31.811 04:01:19 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:31.811 04:01:19 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:31.811 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:31.812 04:01:19 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:31.812 
04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:31.812 04:01:19 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:31.812 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:31.813 
04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
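For reading these identify dumps: nsze, ncap, and nuse are counts of logical blocks, and the in-use LBA format lbaf4 above reports ms:0 lbads:12 rp:0, i.e. no per-LBA metadata and, since LBADS is the log2 of the LBA data size, 2^12 = 4096-byte blocks. So each of these namespaces is 0x100000 x 4 KiB = 4 GiB. A one-line sanity check of that arithmetic, with the values copied from the trace:

    # nsze=0x100000 blocks, in-use format lbaf4 has lbads:12 (4096-byte LBAs, ms:0).
    nsze=0x100000 lbads=12
    echo "$(( nsze * (1 << lbads) >> 30 )) GiB"   # prints: 4 GiB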
00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:31.813 04:01:19 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.813 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:31.814 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:31.814 04:01:19 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.076 04:01:19 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.076 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:32.077 04:01:19 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:32.077 04:01:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:32.077 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:32.078 04:01:19 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:32.078 04:01:19 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:32.078 04:01:19 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:32.078 04:01:19 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:32.078 04:01:19 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:32.078 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
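The entries above and below trace the body of the nvme_get helper: nvme-cli's id-ctrl output is read line by line with IFS=:, split into a register name and a value, and eval'd into a global associative array (nvme3[vid]=0x1b36, nvme3[sn]='12343 ', and so on). A minimal sketch of that loop, reconstructed from the trace rather than copied from SPDK's functions.sh, with the whitespace handling simplified:

nvme_get() {
    local ref=$1 reg val        # $ref names the array to fill, e.g. nvme3
    shift                       # remaining args form the command to run
    local -gA "$ref=()"         # (re)declare the array at global scope
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue              # keep only "reg : val" pairs
        reg=${reg//[[:space:]]/}               # strip padding around the name
        eval "${ref}[\$reg]=\"\${val# }\""     # e.g. nvme3[vid]="0x1b36"
    done < <("$@")              # e.g. nvme_get nvme3 nvme id-ctrl /dev/nvme3
}

Multi-valued fields such as ps0 keep their embedded colons because read hands everything after the first separator to val, which is why the trace stores strings like 'mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' under a single key.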
00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.079 04:01:19 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 
04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.079 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
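Once the registers of interest are captured, the script files the controller into a set of global lookup tables; the ctrls/nvmes/bdfs/ordered_ctrls assignments already seen for nvme2 above and repeated for nvme3 a few entries below (functions.sh@60-63) do exactly this. A sketch of that bookkeeping, with the wrapper function invented here for illustration:

declare -A ctrls nvmes bdfs     # controller -> ctrl array / ns-map name / PCI BDF
declare -a ordered_ctrls        # indexed by controller number, e.g. [3]=nvme3

register_ctrl() {               # hypothetical wrapper around functions.sh@60-63
    local ctrl_dev=$1 pci=$2
    ctrls["$ctrl_dev"]=$ctrl_dev
    nvmes["$ctrl_dev"]="${ctrl_dev}_ns"          # name of the per-ns assoc array
    bdfs["$ctrl_dev"]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # nvme3 -> index 3
}

declare -A nvme3_ns=()          # filled with [1]=nvme3n1, ... while scanning sysfs
register_ctrl nvme3 0000:00:13.0
echo "${bdfs[nvme3]} -> ${nvmes[nvme3]}"         # 0000:00:13.0 -> nvme3_ns

Keeping the namespace map's name (rather than its contents) in nvmes lets later code reach it through a bash nameref, which is what the 'local -n _ctrl_ns=nvme3_ns' entry in the trace below is doing.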
00:10:32.080 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
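The power-state entries below close out nvme3's id-ctrl dump; after filing nvme3 into the lookup tables, the trace moves on to the FDP test proper with get_ctrl_with_feature fdp (nvme_fdp.sh@13), which walks ctrls and keeps the controllers whose CTRATT word advertises Flexible Data Placement. The probe reduces to one bit test; the bit position here comes from the NVMe FDP specification rather than from this log, so treat the sketch as illustrative:

ctrl_has_fdp() {
    local ctrl=$1 ctratt
    local -n _ctrl=$ctrl          # nameref into the id-ctrl array, e.g. nvme3
    ctratt=${_ctrl[ctratt]}       # e.g. 0x88010, captured above
    ((ctratt & 1 << 19))          # CTRATT bit 19 = FDP; (( )) succeeds iff set
}

declare -A nvme3=([ctratt]=0x88010)
ctrl_has_fdp nvme3 && echo "nvme3 supports FDP"   # 0x88010 has bit 19 set

This is consistent with the trace: nvme3 (subnqn nqn.2019-08.org.qemu:fdp-subsys3) reports ctratt=0x88010 with bit 19 set, while the probe loop that follows starts over at nvme1 and checks each controller's stored ctratt in turn.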
00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:32.081 04:01:19 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:32.081 04:01:19 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:32.082 04:01:19 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:32.082 04:01:19 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:32.082 04:01:19 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:32.082 04:01:19 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:32.340 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:32.904 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:32.904 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:32.904 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:32.904 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:32.904 04:01:20 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:32.904 04:01:20 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:32.904 04:01:20 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.904 04:01:20 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:32.904 ************************************ 00:10:32.904 START TEST nvme_flexible_data_placement 00:10:32.904 ************************************ 00:10:32.904 04:01:20 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:33.162 Initializing NVMe Controllers 00:10:33.162 Attaching to 0000:00:13.0 00:10:33.162 Controller supports FDP Attached to 0000:00:13.0 00:10:33.162 Namespace ID: 1 Endurance Group ID: 1 00:10:33.162 Initialization complete. 
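The selection loop above keys off CTRATT bit 19, the Flexible Data Placement bit: nvme0, nvme1 and nvme2 report ctratt=0x8000, so only nvme3 (0x88010) passes. A hedged equivalent with stock nvme-cli and jq, device path assumed:

    ctratt=$(nvme id-ctrl /dev/nvme3 --output-format=json | jq -r '.ctratt')
    (( ctratt & (1 << 19) )) && echo "FDP supported"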
00:10:33.162 00:10:33.162 ================================== 00:10:33.162 == FDP tests for Namespace: #01 == 00:10:33.162 ================================== 00:10:33.162 00:10:33.162 Get Feature: FDP: 00:10:33.162 ================= 00:10:33.162 Enabled: Yes 00:10:33.162 FDP configuration Index: 0 00:10:33.162 00:10:33.162 FDP configurations log page 00:10:33.162 =========================== 00:10:33.162 Number of FDP configurations: 1 00:10:33.162 Version: 0 00:10:33.162 Size: 112 00:10:33.162 FDP Configuration Descriptor: 0 00:10:33.162 Descriptor Size: 96 00:10:33.162 Reclaim Group Identifier format: 2 00:10:33.162 FDP Volatile Write Cache: Not Present 00:10:33.162 FDP Configuration: Valid 00:10:33.162 Vendor Specific Size: 0 00:10:33.162 Number of Reclaim Groups: 2 00:10:33.162 Number of Reclaim Unit Handles: 8 00:10:33.162 Max Placement Identifiers: 128 00:10:33.162 Number of Namespaces Supported: 256 00:10:33.162 Reclaim Unit Nominal Size: 6000000 bytes 00:10:33.162 Estimated Reclaim Unit Time Limit: Not Reported 00:10:33.162 RUH Desc #000: RUH Type: Initially Isolated 00:10:33.162 RUH Desc #001: RUH Type: Initially Isolated 00:10:33.162 RUH Desc #002: RUH Type: Initially Isolated 00:10:33.162 RUH Desc #003: RUH Type: Initially Isolated 00:10:33.162 RUH Desc #004: RUH Type: Initially Isolated 00:10:33.162 RUH Desc #005: RUH Type: Initially Isolated 00:10:33.162 RUH Desc #006: RUH Type: Initially Isolated 00:10:33.162 RUH Desc #007: RUH Type: Initially Isolated 00:10:33.162 00:10:33.162 FDP reclaim unit handle usage log page 00:10:33.162 ====================================== 00:10:33.162 Number of Reclaim Unit Handles: 8 00:10:33.162 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:33.162 RUH Usage Desc #001: RUH Attributes: Unused 00:10:33.162 RUH Usage Desc #002: RUH Attributes: Unused 00:10:33.162 RUH Usage Desc #003: RUH Attributes: Unused 00:10:33.162 RUH Usage Desc #004: RUH Attributes: Unused 00:10:33.162 RUH Usage Desc #005: RUH Attributes: Unused 00:10:33.162 RUH Usage Desc #006: RUH Attributes: Unused 00:10:33.162 RUH Usage Desc #007: RUH Attributes: Unused 00:10:33.162 00:10:33.162 FDP statistics log page 00:10:33.162 ======================= 00:10:33.162 Host bytes with metadata written: 1015291904 00:10:33.162 Media bytes with metadata written: 1015422976 00:10:33.162 Media bytes erased: 0 00:10:33.162 00:10:33.162 FDP Reclaim unit handle status 00:10:33.162 ============================== 00:10:33.162 Number of RUHS descriptors: 2 00:10:33.162 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000057be 00:10:33.162 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:33.162 00:10:33.162 FDP write on placement id: 0 success 00:10:33.162 00:10:33.162 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:10:33.162 00:10:33.162 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:33.162 00:10:33.162 Get Feature: FDP Events for Placement handle: #0 00:10:33.162 ======================== 00:10:33.162 Number of FDP Events: 6 00:10:33.162 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:33.162 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:33.162 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:10:33.162 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:33.162 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:33.162 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:33.162 00:10:33.162 FDP events log
page 00:10:33.162 =================== 00:10:33.162 Number of FDP events: 1 00:10:33.162 FDP Event #0: 00:10:33.162 Event Type: RU Not Written to Capacity 00:10:33.163 Placement Identifier: Valid 00:10:33.163 NSID: Valid 00:10:33.163 Location: Valid 00:10:33.163 Placement Identifier: 0 00:10:33.163 Event Timestamp: 5 00:10:33.163 Namespace Identifier: 1 00:10:33.163 Reclaim Group Identifier: 0 00:10:33.163 Reclaim Unit Handle Identifier: 0 00:10:33.163 00:10:33.163 FDP test passed 00:10:33.163 00:10:33.163 real 0m0.231s 00:10:33.163 user 0m0.074s 00:10:33.163 sys 0m0.056s 00:10:33.163 04:01:20 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.163 04:01:20 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:33.163 ************************************ 00:10:33.163 END TEST nvme_flexible_data_placement 00:10:33.163 ************************************ 00:10:33.163 00:10:33.163 real 0m7.564s 00:10:33.163 user 0m1.091s 00:10:33.163 sys 0m1.336s 00:10:33.163 04:01:20 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.163 04:01:20 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:33.163 ************************************ 00:10:33.163 END TEST nvme_fdp 00:10:33.163 ************************************ 00:10:33.163 04:01:20 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:33.163 04:01:20 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:33.163 04:01:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:33.163 04:01:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.163 04:01:20 -- common/autotest_common.sh@10 -- # set +x 00:10:33.163 ************************************ 00:10:33.163 START TEST nvme_rpc 00:10:33.163 ************************************ 00:10:33.163 04:01:20 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:33.421 * Looking for test storage... 
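The fdp binary just walked the four FDP log pages (configurations, reclaim unit handle usage, statistics, events). Roughly the same dumps can be pulled with plain nvme-cli using the raw log IDs from the NVMe 2.0 spec; the LIDs, the 512-byte length, and the LSI (endurance group 1) below are assumptions, not values the test prints:

    dev=/dev/nvme3
    for lid in 0x20 0x21 0x22 0x23; do    # FDP configs, RUH usage, stats, events
        nvme get-log "$dev" --log-id="$lid" --log-len=512 --lsi=1
    done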
00:10:33.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:33.421 04:01:20 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:33.421 04:01:20 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:33.421 04:01:20 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:33.421 04:01:20 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:33.421 04:01:20 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.421 04:01:20 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.421 04:01:20 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.421 04:01:20 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.421 04:01:20 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.421 04:01:20 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.421 04:01:20 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.421 04:01:20 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.421 04:01:20 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.421 04:01:20 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.421 04:01:20 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.421 04:01:20 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:33.421 04:01:20 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:33.421 04:01:20 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.421 04:01:20 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:33.421 04:01:20 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:33.422 04:01:20 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:33.422 04:01:20 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.422 04:01:20 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:33.422 04:01:20 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.422 04:01:20 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:33.422 04:01:20 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:33.422 04:01:20 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.422 04:01:20 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:33.422 04:01:20 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.422 04:01:20 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.422 04:01:20 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.422 04:01:20 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:33.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.422 --rc genhtml_branch_coverage=1 00:10:33.422 --rc genhtml_function_coverage=1 00:10:33.422 --rc genhtml_legend=1 00:10:33.422 --rc geninfo_all_blocks=1 00:10:33.422 --rc geninfo_unexecuted_blocks=1 00:10:33.422 00:10:33.422 ' 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:33.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.422 --rc genhtml_branch_coverage=1 00:10:33.422 --rc genhtml_function_coverage=1 00:10:33.422 --rc genhtml_legend=1 00:10:33.422 --rc geninfo_all_blocks=1 00:10:33.422 --rc geninfo_unexecuted_blocks=1 00:10:33.422 00:10:33.422 ' 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:33.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.422 --rc genhtml_branch_coverage=1 00:10:33.422 --rc genhtml_function_coverage=1 00:10:33.422 --rc genhtml_legend=1 00:10:33.422 --rc geninfo_all_blocks=1 00:10:33.422 --rc geninfo_unexecuted_blocks=1 00:10:33.422 00:10:33.422 ' 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:33.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.422 --rc genhtml_branch_coverage=1 00:10:33.422 --rc genhtml_function_coverage=1 00:10:33.422 --rc genhtml_legend=1 00:10:33.422 --rc geninfo_all_blocks=1 00:10:33.422 --rc geninfo_unexecuted_blocks=1 00:10:33.422 00:10:33.422 ' 00:10:33.422 04:01:20 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:33.422 04:01:20 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:33.422 04:01:20 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:33.422 04:01:20 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:33.422 04:01:20 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65757 00:10:33.422 04:01:20 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:33.422 04:01:20 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65757 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65757 ']' 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.422 04:01:20 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.422 [2024-12-06 04:01:20.940306] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
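get_first_nvme_bdf above goes through SPDK's gen_nvme.sh plus jq because the controllers are bound to uio_pci_generic; for kernel-driven controllers the same list can be read straight from sysfs. A sketch:

    # First NVMe controller PCI address, via the nvme class symlinks.
    for c in /sys/class/nvme/nvme*; do
        basename "$(readlink -f "$c/device")"
    done | sort | head -n1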
00:10:33.422 [2024-12-06 04:01:20.940451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65757 ] 00:10:33.680 [2024-12-06 04:01:21.105162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:33.938 [2024-12-06 04:01:21.208210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.938 [2024-12-06 04:01:21.208289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.506 04:01:21 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.506 04:01:21 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:34.506 04:01:21 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:34.789 Nvme0n1 00:10:34.789 04:01:22 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:34.789 04:01:22 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:34.789 request: 00:10:34.789 { 00:10:34.789 "bdev_name": "Nvme0n1", 00:10:34.789 "filename": "non_existing_file", 00:10:34.789 "method": "bdev_nvme_apply_firmware", 00:10:34.789 "req_id": 1 00:10:34.789 } 00:10:34.789 Got JSON-RPC error response 00:10:34.789 response: 00:10:34.789 { 00:10:34.789 "code": -32603, 00:10:34.789 "message": "open file failed." 00:10:34.789 } 00:10:34.789 04:01:22 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:34.789 04:01:22 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:34.789 04:01:22 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:35.046 04:01:22 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:35.046 04:01:22 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65757 00:10:35.046 04:01:22 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65757 ']' 00:10:35.046 04:01:22 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65757 00:10:35.046 04:01:22 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:10:35.046 04:01:22 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:35.046 04:01:22 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65757 00:10:35.046 04:01:22 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:35.046 killing process with pid 65757 00:10:35.046 04:01:22 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:35.046 04:01:22 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65757' 00:10:35.046 04:01:22 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65757 00:10:35.046 04:01:22 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65757 00:10:36.416 ************************************ 00:10:36.416 END TEST nvme_rpc 00:10:36.416 ************************************ 00:10:36.416 00:10:36.416 real 0m3.196s 00:10:36.416 user 0m6.156s 00:10:36.416 sys 0m0.490s 00:10:36.416 04:01:23 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.416 04:01:23 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.416 04:01:23 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:36.416 04:01:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:10:36.416 04:01:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.416 04:01:23 -- common/autotest_common.sh@10 -- # set +x 00:10:36.416 ************************************ 00:10:36.416 START TEST nvme_rpc_timeouts 00:10:36.416 ************************************ 00:10:36.416 04:01:23 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:36.674 * Looking for test storage... 00:10:36.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:36.675 04:01:23 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:36.675 04:01:23 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:36.675 04:01:23 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:10:36.675 04:01:24 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:36.675 04:01:24 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:36.675 04:01:24 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.675 04:01:24 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:36.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.675 --rc genhtml_branch_coverage=1 00:10:36.675 --rc genhtml_function_coverage=1 00:10:36.675 --rc genhtml_legend=1 00:10:36.675 --rc geninfo_all_blocks=1 00:10:36.675 --rc geninfo_unexecuted_blocks=1 00:10:36.675 00:10:36.675 ' 00:10:36.675 04:01:24 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:36.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.675 --rc genhtml_branch_coverage=1 00:10:36.675 --rc genhtml_function_coverage=1 00:10:36.675 --rc genhtml_legend=1 00:10:36.675 --rc geninfo_all_blocks=1 00:10:36.675 --rc geninfo_unexecuted_blocks=1 00:10:36.675 00:10:36.675 ' 00:10:36.675 04:01:24 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:36.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.675 --rc genhtml_branch_coverage=1 00:10:36.675 --rc genhtml_function_coverage=1 00:10:36.675 --rc genhtml_legend=1 00:10:36.675 --rc geninfo_all_blocks=1 00:10:36.675 --rc geninfo_unexecuted_blocks=1 00:10:36.675 00:10:36.675 ' 00:10:36.675 04:01:24 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:36.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.675 --rc genhtml_branch_coverage=1 00:10:36.675 --rc genhtml_function_coverage=1 00:10:36.675 --rc genhtml_legend=1 00:10:36.675 --rc geninfo_all_blocks=1 00:10:36.675 --rc geninfo_unexecuted_blocks=1 00:10:36.675 00:10:36.675 ' 00:10:36.675 04:01:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:36.675 04:01:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65822 00:10:36.675 04:01:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65822 00:10:36.675 04:01:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65854 00:10:36.675 04:01:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
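The trap registered here is the test's entire cleanup path: one handler that kills the target and deletes both settings dumps on SIGINT, SIGTERM, or normal exit. The generic shape of the pattern, with placeholder names:

    my_daemon &        # hypothetical long-running target
    daemon_pid=$!
    settings=$(mktemp)
    trap 'kill -9 "$daemon_pid" 2>/dev/null; rm -f "$settings"' SIGINT SIGTERM EXIT

Installing the handler before waiting for the listener matters: if startup hangs and the run is interrupted, the target still gets reaped.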
00:10:36.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.675 04:01:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65854 00:10:36.675 04:01:24 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65854 ']' 00:10:36.675 04:01:24 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.675 04:01:24 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.675 04:01:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:36.675 04:01:24 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.675 04:01:24 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.675 04:01:24 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:36.675 [2024-12-06 04:01:24.115326] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:10:36.675 [2024-12-06 04:01:24.115442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65854 ] 00:10:36.932 [2024-12-06 04:01:24.261453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:36.932 [2024-12-06 04:01:24.341493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.932 [2024-12-06 04:01:24.341567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.496 Checking default timeout settings: 00:10:37.496 04:01:24 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.496 04:01:24 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:10:37.496 04:01:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:37.496 04:01:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:37.755 Making settings changes with rpc: 00:10:37.756 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:37.756 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:38.013 Check default vs. modified settings: 00:10:38.013 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:38.013 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:38.270 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:38.270 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:38.270 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65822 00:10:38.270 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:38.270 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:38.527 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:38.527 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:38.527 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65822 00:10:38.527 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:38.527 Setting action_on_timeout is changed as expected. 00:10:38.527 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:38.527 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65822 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65822 00:10:38.528 Setting timeout_us is changed as expected. 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65822 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65822 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:38.528 Setting timeout_admin_us is changed as expected. 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65822 /tmp/settings_modified_65822 00:10:38.528 04:01:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65854 00:10:38.528 04:01:25 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65854 ']' 00:10:38.528 04:01:25 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65854 00:10:38.528 04:01:25 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:38.528 04:01:25 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:38.528 04:01:25 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65854 00:10:38.528 killing process with pid 65854 00:10:38.528 04:01:25 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:38.528 04:01:25 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:38.528 04:01:25 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65854' 00:10:38.528 04:01:25 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65854 00:10:38.528 04:01:25 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65854 00:10:39.903 RPC TIMEOUT SETTING TEST PASSED. 00:10:39.903 04:01:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
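Each check in the loop above is the same pipeline: grep one setting's line out of a saved config dump, take column two with awk, strip punctuation with sed, then compare the default value against the modified one. A compact equivalent for a single setting, assuming the dumps are the JSON emitted by save_config:

    before=$(jq 'first(.. | .timeout_us? // empty)' /tmp/settings_default_65822)
    after=$(jq 'first(.. | .timeout_us? // empty)' /tmp/settings_modified_65822)
    [[ $before != "$after" ]] && echo "Setting timeout_us is changed as expected."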
00:10:39.903 ************************************ 00:10:39.903 END TEST nvme_rpc_timeouts 00:10:39.903 ************************************ 00:10:39.903 00:10:39.903 real 0m3.110s 00:10:39.903 user 0m6.119s 00:10:39.903 sys 0m0.459s 00:10:39.903 04:01:27 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.903 04:01:27 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:39.903 04:01:27 -- spdk/autotest.sh@239 -- # uname -s 00:10:39.903 04:01:27 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:39.903 04:01:27 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:39.903 04:01:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:39.903 04:01:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.903 04:01:27 -- common/autotest_common.sh@10 -- # set +x 00:10:39.903 ************************************ 00:10:39.903 START TEST sw_hotplug 00:10:39.903 ************************************ 00:10:39.903 04:01:27 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:39.903 * Looking for test storage... 00:10:39.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:39.903 04:01:27 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:39.903 04:01:27 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:39.903 04:01:27 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:10:39.903 04:01:27 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:39.903 04:01:27 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:39.903 04:01:27 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:39.903 04:01:27 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:39.903 04:01:27 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:39.903 04:01:27 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:39.903 04:01:27 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:39.903 04:01:27 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:39.903 04:01:27 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:39.903 04:01:27 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:39.903 04:01:27 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:39.903 04:01:27 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:39.903 04:01:27 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:39.903 04:01:27 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:39.903 04:01:27 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:39.903 04:01:27 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:39.903 04:01:27 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:39.903 04:01:27 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:39.903 04:01:27 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:39.904 04:01:27 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:39.904 04:01:27 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:39.904 04:01:27 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:39.904 04:01:27 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:39.904 04:01:27 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:39.904 04:01:27 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:39.904 04:01:27 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:39.904 04:01:27 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:39.904 04:01:27 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:39.904 04:01:27 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:39.904 04:01:27 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:39.904 04:01:27 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:39.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.904 --rc genhtml_branch_coverage=1 00:10:39.904 --rc genhtml_function_coverage=1 00:10:39.904 --rc genhtml_legend=1 00:10:39.904 --rc geninfo_all_blocks=1 00:10:39.904 --rc geninfo_unexecuted_blocks=1 00:10:39.904 00:10:39.904 ' 00:10:39.904 04:01:27 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:39.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.904 --rc genhtml_branch_coverage=1 00:10:39.904 --rc genhtml_function_coverage=1 00:10:39.904 --rc genhtml_legend=1 00:10:39.904 --rc geninfo_all_blocks=1 00:10:39.904 --rc geninfo_unexecuted_blocks=1 00:10:39.904 00:10:39.904 ' 00:10:39.904 04:01:27 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:39.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.904 --rc genhtml_branch_coverage=1 00:10:39.904 --rc genhtml_function_coverage=1 00:10:39.904 --rc genhtml_legend=1 00:10:39.904 --rc geninfo_all_blocks=1 00:10:39.904 --rc geninfo_unexecuted_blocks=1 00:10:39.904 00:10:39.904 ' 00:10:39.904 04:01:27 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:39.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.904 --rc genhtml_branch_coverage=1 00:10:39.904 --rc genhtml_function_coverage=1 00:10:39.904 --rc genhtml_legend=1 00:10:39.904 --rc geninfo_all_blocks=1 00:10:39.904 --rc geninfo_unexecuted_blocks=1 00:10:39.904 00:10:39.904 ' 00:10:39.904 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:40.162 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:40.162 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:40.162 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:40.162 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:40.162 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:40.162 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:40.162 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:40.162 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
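nvme_in_userspace, expanded in the trace that follows, builds the device list by filtering lspci for class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express). A standalone equivalent, assuming pciutils is installed:

    lspci -Dmmn | grep -i -- -p02 \
        | awk -F '"' '$2 == "0108" { gsub(/[[:space:]]/, "", $1); print $1 }'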
00:10:40.162 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:40.162 04:01:27 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:40.162 04:01:27 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:40.162 04:01:27 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:40.162 04:01:27 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:40.162 04:01:27 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:40.162 04:01:27 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:40.162 04:01:27 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:40.162 04:01:27 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:40.162 04:01:27 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:40.162 04:01:27 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:40.162 04:01:27 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:40.162 04:01:27 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:40.162 04:01:27 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:40.162 04:01:27 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:40.162 04:01:27 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:40.162 04:01:27 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:40.162 04:01:27 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:40.162 04:01:27 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:40.162 04:01:27 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:40.163 04:01:27 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:40.163 04:01:27 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:40.163 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:40.163 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:40.163 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:40.728 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:40.728 Waiting for block devices as requested 00:10:40.728 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:40.728 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:40.728 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:40.986 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:46.247 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:46.247 04:01:33 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:46.247 04:01:33 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:46.247 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:46.506 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:46.506 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:46.506 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:46.764 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:46.764 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:47.022 04:01:34 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:47.022 04:01:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.022 04:01:34 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:47.022 04:01:34 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:47.022 04:01:34 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66707 00:10:47.022 04:01:34 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:47.022 04:01:34 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:47.022 04:01:34 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:47.022 04:01:34 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:47.022 04:01:34 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:47.022 04:01:34 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:47.022 04:01:34 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:47.022 04:01:34 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:47.022 04:01:34 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:47.022 04:01:34 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:47.022 04:01:34 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:47.022 04:01:34 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:47.022 04:01:34 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:47.022 04:01:34 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:47.281 Initializing NVMe Controllers 00:10:47.281 Attaching to 0000:00:10.0 00:10:47.281 Attaching to 0000:00:11.0 00:10:47.281 Attached to 0000:00:10.0 00:10:47.281 Attached to 0000:00:11.0 00:10:47.281 Initialization complete. Starting I/O... 
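The hotplug example app only drives I/O; the surprise-removal events below are injected by the harness through sysfs. The core of that dance, with an example BDF (root required):

    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # simulate surprise removal
    echo 1 > /sys/bus/pci/rescan                  # rediscover the function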
00:10:47.281 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:47.281 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:47.281 00:10:48.214 QEMU NVMe Ctrl (12340 ): 2737 I/Os completed (+2737) 00:10:48.214 QEMU NVMe Ctrl (12341 ): 2733 I/Os completed (+2733) 00:10:48.214 00:10:49.148 QEMU NVMe Ctrl (12340 ): 6156 I/Os completed (+3419) 00:10:49.148 QEMU NVMe Ctrl (12341 ): 6109 I/Os completed (+3376) 00:10:49.148 00:10:50.086 QEMU NVMe Ctrl (12340 ): 9216 I/Os completed (+3060) 00:10:50.086 QEMU NVMe Ctrl (12341 ): 9177 I/Os completed (+3068) 00:10:50.086 00:10:51.463 QEMU NVMe Ctrl (12340 ): 12448 I/Os completed (+3232) 00:10:51.463 QEMU NVMe Ctrl (12341 ): 12403 I/Os completed (+3226) 00:10:51.463 00:10:52.493 QEMU NVMe Ctrl (12340 ): 16132 I/Os completed (+3684) 00:10:52.493 QEMU NVMe Ctrl (12341 ): 16078 I/Os completed (+3675) 00:10:52.493 00:10:53.090 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:53.090 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:53.090 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:53.090 [2024-12-06 04:01:40.379229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:53.090 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:53.090 [2024-12-06 04:01:40.380227] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.090 [2024-12-06 04:01:40.380266] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.090 [2024-12-06 04:01:40.380280] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.090 [2024-12-06 04:01:40.380295] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.090 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:53.090 [2024-12-06 04:01:40.381913] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.090 [2024-12-06 04:01:40.382009] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.090 [2024-12-06 04:01:40.382050] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.090 [2024-12-06 04:01:40.382123] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.091 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:53.091 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:53.091 [2024-12-06 04:01:40.400094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:53.091 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:53.091 [2024-12-06 04:01:40.401078] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.091 [2024-12-06 04:01:40.401171] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.091 [2024-12-06 04:01:40.401204] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.091 [2024-12-06 04:01:40.401264] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.091 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:53.091 [2024-12-06 04:01:40.402766] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.091 [2024-12-06 04:01:40.402855] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.091 [2024-12-06 04:01:40.402921] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.091 [2024-12-06 04:01:40.402946] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.091 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:53.091 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:53.091 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:53.091 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:53.091 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:53.091 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:53.091 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:53.091 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:53.091 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:53.091 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:53.091 Attaching to 0000:00:10.0 00:10:53.091 Attached to 0000:00:10.0 00:10:53.091 QEMU NVMe Ctrl (12340 ): 44 I/Os completed (+44) 00:10:53.091 00:10:53.350 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:53.350 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:53.350 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:53.350 Attaching to 0000:00:11.0 00:10:53.350 Attached to 0000:00:11.0 00:10:54.284 QEMU NVMe Ctrl (12340 ): 3720 I/Os completed (+3676) 00:10:54.284 QEMU NVMe Ctrl (12341 ): 3460 I/Os completed (+3460) 00:10:54.284 00:10:55.220 QEMU NVMe Ctrl (12340 ): 6798 I/Os completed (+3078) 00:10:55.220 QEMU NVMe Ctrl (12341 ): 6553 I/Os completed (+3093) 00:10:55.220 00:10:56.157 QEMU NVMe Ctrl (12340 ): 9902 I/Os completed (+3104) 00:10:56.157 QEMU NVMe Ctrl (12341 ): 9626 I/Os completed (+3073) 00:10:56.157 00:10:57.092 QEMU NVMe Ctrl (12340 ): 13066 I/Os completed (+3164) 00:10:57.092 QEMU NVMe Ctrl (12341 ): 12793 I/Os completed (+3167) 00:10:57.092 00:10:58.466 QEMU NVMe Ctrl (12340 ): 16699 I/Os completed (+3633) 00:10:58.466 QEMU NVMe Ctrl (12341 ): 16422 I/Os completed (+3629) 00:10:58.466 00:10:59.402 QEMU NVMe Ctrl (12340 ): 20342 I/Os completed (+3643) 00:10:59.402 QEMU NVMe Ctrl (12341 ): 20079 I/Os completed (+3657) 00:10:59.402 00:11:00.340 QEMU NVMe Ctrl (12340 ): 23957 I/Os completed (+3615) 00:11:00.340 QEMU NVMe Ctrl (12341 ): 23713 I/Os completed (+3634) 00:11:00.340 00:11:01.281 QEMU NVMe Ctrl (12340 ): 27515 I/Os completed (+3558) 00:11:01.281 
QEMU NVMe Ctrl (12341 ): 27249 I/Os completed (+3536) 00:11:01.281 00:11:02.262 QEMU NVMe Ctrl (12340 ): 30583 I/Os completed (+3068) 00:11:02.262 QEMU NVMe Ctrl (12341 ): 30298 I/Os completed (+3049) 00:11:02.262 00:11:03.197 QEMU NVMe Ctrl (12340 ): 33648 I/Os completed (+3065) 00:11:03.197 QEMU NVMe Ctrl (12341 ): 33401 I/Os completed (+3103) 00:11:03.197 00:11:04.132 QEMU NVMe Ctrl (12340 ): 36680 I/Os completed (+3032) 00:11:04.132 QEMU NVMe Ctrl (12341 ): 36455 I/Os completed (+3054) 00:11:04.132 00:11:05.067 QEMU NVMe Ctrl (12340 ): 40020 I/Os completed (+3340) 00:11:05.068 QEMU NVMe Ctrl (12341 ): 39802 I/Os completed (+3347) 00:11:05.068 00:11:05.357 04:01:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:05.357 04:01:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:05.357 04:01:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:05.357 04:01:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:05.357 [2024-12-06 04:01:52.639117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:05.357 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:05.357 [2024-12-06 04:01:52.640187] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.357 [2024-12-06 04:01:52.640248] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.357 [2024-12-06 04:01:52.640276] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.357 [2024-12-06 04:01:52.640292] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.357 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:05.357 [2024-12-06 04:01:52.641901] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.357 [2024-12-06 04:01:52.641935] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.357 [2024-12-06 04:01:52.641947] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.357 [2024-12-06 04:01:52.641959] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.357 04:01:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:05.357 04:01:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:05.357 [2024-12-06 04:01:52.660796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:05.357 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:05.357 [2024-12-06 04:01:52.661622] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.357 [2024-12-06 04:01:52.661660] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.357 [2024-12-06 04:01:52.661676] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.357 [2024-12-06 04:01:52.661688] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.357 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:05.357 [2024-12-06 04:01:52.663143] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.357 [2024-12-06 04:01:52.663229] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.357 [2024-12-06 04:01:52.663259] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.357 [2024-12-06 04:01:52.663296] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.357 04:01:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:05.357 04:01:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:05.357 04:01:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:05.357 04:01:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:05.357 04:01:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:05.357 04:01:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:05.357 04:01:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:05.357 04:01:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:05.357 04:01:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:05.357 04:01:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:05.357 Attaching to 0000:00:10.0 00:11:05.357 Attached to 0000:00:10.0 00:11:05.615 04:01:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:05.615 04:01:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:05.615 04:01:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:05.616 Attaching to 0000:00:11.0 00:11:05.616 Attached to 0000:00:11.0 00:11:06.183 QEMU NVMe Ctrl (12340 ): 2706 I/Os completed (+2706) 00:11:06.183 QEMU NVMe Ctrl (12341 ): 2349 I/Os completed (+2349) 00:11:06.183 00:11:07.117 QEMU NVMe Ctrl (12340 ): 6308 I/Os completed (+3602) 00:11:07.117 QEMU NVMe Ctrl (12341 ): 5948 I/Os completed (+3599) 00:11:07.117 00:11:08.051 QEMU NVMe Ctrl (12340 ): 9940 I/Os completed (+3632) 00:11:08.051 QEMU NVMe Ctrl (12341 ): 9576 I/Os completed (+3628) 00:11:08.051 00:11:09.425 QEMU NVMe Ctrl (12340 ): 13542 I/Os completed (+3602) 00:11:09.425 QEMU NVMe Ctrl (12341 ): 13186 I/Os completed (+3610) 00:11:09.425 00:11:10.360 QEMU NVMe Ctrl (12340 ): 17190 I/Os completed (+3648) 00:11:10.360 QEMU NVMe Ctrl (12341 ): 16876 I/Os completed (+3690) 00:11:10.360 00:11:11.294 QEMU NVMe Ctrl (12340 ): 20247 I/Os completed (+3057) 00:11:11.294 QEMU NVMe Ctrl (12341 ): 19936 I/Os completed (+3060) 00:11:11.294 00:11:12.228 QEMU NVMe Ctrl (12340 ): 23289 I/Os completed (+3042) 00:11:12.228 QEMU NVMe Ctrl (12341 ): 22982 I/Os completed (+3046) 00:11:12.228 00:11:13.188 QEMU NVMe Ctrl (12340 ): 26379 I/Os completed (+3090) 00:11:13.188 QEMU NVMe Ctrl (12341 ): 26065 I/Os completed (+3083) 00:11:13.188 
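Each removal burst above (sw_hotplug.sh@39-40, the bare 'echo 1') is the test surprise-removing the PCI functions through sysfs, which is exactly what produces the nvme_ctrlr_fail '... in failed state' errors and the qpair abort-tracker messages, while the later 'echo 1' at sw_hotplug.sh@56 rescans the bus so the controllers reappear for the next iteration. A sketch of that remove/rescan pair, assuming it mirrors the script (the rescan path is confirmed by the SIGINT trap shown further down):

    # Surprise-remove both NVMe functions, then bring them back via bus rescan.
    for bdf in 0000:00:10.0 0000:00:11.0; do
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # hot-remove (sw_hotplug.sh@40)
    done
    echo 1 > /sys/bus/pci/rescan                      # re-enumerate (sw_hotplug.sh@56)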
00:11:14.122 QEMU NVMe Ctrl (12340 ): 29932 I/Os completed (+3553) 00:11:14.122 QEMU NVMe Ctrl (12341 ): 29599 I/Os completed (+3534) 00:11:14.122 00:11:15.057 QEMU NVMe Ctrl (12340 ): 33542 I/Os completed (+3610) 00:11:15.057 QEMU NVMe Ctrl (12341 ): 33204 I/Os completed (+3605) 00:11:15.057 00:11:16.433 QEMU NVMe Ctrl (12340 ): 37185 I/Os completed (+3643) 00:11:16.433 QEMU NVMe Ctrl (12341 ): 36848 I/Os completed (+3644) 00:11:16.433 00:11:17.367 QEMU NVMe Ctrl (12340 ): 40430 I/Os completed (+3245) 00:11:17.368 QEMU NVMe Ctrl (12341 ): 40039 I/Os completed (+3191) 00:11:17.368 00:11:17.626 04:02:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:17.626 04:02:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:17.627 04:02:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:17.627 04:02:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:17.627 [2024-12-06 04:02:04.923390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:17.627 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:17.627 [2024-12-06 04:02:04.924625] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.627 [2024-12-06 04:02:04.924775] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.627 [2024-12-06 04:02:04.924814] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.627 [2024-12-06 04:02:04.924896] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.627 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:17.627 [2024-12-06 04:02:04.926900] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.627 [2024-12-06 04:02:04.926967] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.627 [2024-12-06 04:02:04.926996] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.627 [2024-12-06 04:02:04.927070] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.627 04:02:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:17.627 04:02:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:17.627 [2024-12-06 04:02:04.946970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:17.627 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:17.627 [2024-12-06 04:02:04.948068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.627 [2024-12-06 04:02:04.948191] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.627 [2024-12-06 04:02:04.948228] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.627 [2024-12-06 04:02:04.948300] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.627 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:17.627 [2024-12-06 04:02:04.950076] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.627 [2024-12-06 04:02:04.950176] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.627 [2024-12-06 04:02:04.950212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.627 [2024-12-06 04:02:04.950283] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.627 04:02:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:17.627 04:02:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:17.627 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:17.627 EAL: Scan for (pci) bus failed. 00:11:17.627 04:02:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:17.627 04:02:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:17.627 04:02:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:17.627 04:02:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:17.627 04:02:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:17.627 04:02:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:17.627 04:02:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:17.627 04:02:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:17.627 Attaching to 0000:00:10.0 00:11:17.627 Attached to 0000:00:10.0 00:11:17.885 04:02:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:17.885 04:02:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:17.885 04:02:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:17.885 Attaching to 0000:00:11.0 00:11:17.885 Attached to 0000:00:11.0 00:11:17.885 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:17.885 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:17.885 [2024-12-06 04:02:05.181781] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:30.130 04:02:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:30.130 04:02:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:30.130 04:02:17 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.80 00:11:30.130 04:02:17 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.80 00:11:30.130 04:02:17 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:30.130 04:02:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.80 00:11:30.130 04:02:17 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.80 2 00:11:30.130 remove_attach_helper took 42.80s to complete (handling 2 nvme drive(s)) 04:02:17 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:36.691 04:02:23 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66707 00:11:36.691 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66707) - No such process 00:11:36.691 04:02:23 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66707 00:11:36.691 04:02:23 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:36.691 04:02:23 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:36.691 04:02:23 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:36.691 04:02:23 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67259 00:11:36.691 04:02:23 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:36.691 04:02:23 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:36.691 04:02:23 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67259 00:11:36.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.691 04:02:23 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67259 ']' 00:11:36.691 04:02:23 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.691 04:02:23 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:36.691 04:02:23 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.691 04:02:23 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:36.691 04:02:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.691 [2024-12-06 04:02:23.260378] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:11:36.691 [2024-12-06 04:02:23.260930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67259 ] 00:11:36.691 [2024-12-06 04:02:23.421134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.691 [2024-12-06 04:02:23.513962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.691 04:02:24 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.691 04:02:24 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:11:36.691 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:36.691 04:02:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.691 04:02:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.691 04:02:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.691 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:36.691 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:36.691 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:36.691 04:02:24 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:36.691 04:02:24 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:36.691 04:02:24 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:36.691 04:02:24 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:36.691 04:02:24 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:36.691 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:36.691 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:36.691 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:36.691 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:36.691 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:43.301 04:02:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:43.301 04:02:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:43.301 04:02:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:43.301 04:02:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:43.301 04:02:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:43.301 04:02:30 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:43.301 04:02:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:43.301 04:02:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:43.301 04:02:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:43.301 04:02:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:43.301 04:02:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:43.301 04:02:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.301 04:02:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:43.301 04:02:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.301 04:02:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:43.301 04:02:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:43.301 [2024-12-06 04:02:30.199841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:11:43.301 [2024-12-06 04:02:30.201272] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.301 [2024-12-06 04:02:30.201308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.301 [2024-12-06 04:02:30.201321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.301 [2024-12-06 04:02:30.201338] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.301 [2024-12-06 04:02:30.201345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.301 [2024-12-06 04:02:30.201354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.301 [2024-12-06 04:02:30.201361] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.301 [2024-12-06 04:02:30.201368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.301 [2024-12-06 04:02:30.201374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.301 [2024-12-06 04:02:30.201385] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.301 [2024-12-06 04:02:30.201392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.301 [2024-12-06 04:02:30.201400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.301 04:02:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:43.301 04:02:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:43.301 04:02:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:43.301 04:02:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:43.301 04:02:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:43.301 04:02:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:43.301 04:02:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.301 04:02:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:43.301 [2024-12-06 04:02:30.699830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:43.301 [2024-12-06 04:02:30.701222] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.301 [2024-12-06 04:02:30.701251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.301 [2024-12-06 04:02:30.701262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.301 [2024-12-06 04:02:30.701278] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.301 [2024-12-06 04:02:30.701286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.301 [2024-12-06 04:02:30.701293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.301 [2024-12-06 04:02:30.701302] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.301 [2024-12-06 04:02:30.701309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.301 [2024-12-06 04:02:30.701317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.301 [2024-12-06 04:02:30.701324] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.301 [2024-12-06 04:02:30.701332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.301 [2024-12-06 04:02:30.701339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.301 04:02:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.301 04:02:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:43.301 04:02:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:43.866 04:02:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:43.866 04:02:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:43.866 04:02:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:43.866 04:02:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:43.867 04:02:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:43.867 04:02:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:43.867 04:02:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.867 04:02:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:43.867 04:02:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.867 04:02:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:43.867 04:02:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:43.867 04:02:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:43.867 04:02:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:43.867 04:02:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:44.124 04:02:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:44.124 04:02:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:44.124 04:02:31 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:44.124 04:02:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:44.124 04:02:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:44.124 04:02:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:44.124 04:02:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:44.124 04:02:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:56.314 04:02:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:56.314 04:02:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:56.314 04:02:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:56.314 04:02:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.314 04:02:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.314 04:02:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.314 04:02:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.314 04:02:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.314 04:02:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.314 04:02:43 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:56.314 04:02:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:56.314 04:02:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:56.314 04:02:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:56.314 04:02:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:56.314 04:02:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:56.314 04:02:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:56.314 04:02:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:56.314 04:02:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:56.314 04:02:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.314 04:02:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.314 04:02:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.314 04:02:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.314 04:02:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.314 04:02:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.314 04:02:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:56.314 04:02:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:56.314 [2024-12-06 04:02:43.600006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:56.314 [2024-12-06 04:02:43.601374] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.314 [2024-12-06 04:02:43.601411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.314 [2024-12-06 04:02:43.601423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.314 [2024-12-06 04:02:43.601441] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.314 [2024-12-06 04:02:43.601448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.314 [2024-12-06 04:02:43.601457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.314 [2024-12-06 04:02:43.601464] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.314 [2024-12-06 04:02:43.601472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.314 [2024-12-06 04:02:43.601478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.314 [2024-12-06 04:02:43.601487] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.314 [2024-12-06 04:02:43.601493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.314 [2024-12-06 04:02:43.601501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.573 [2024-12-06 04:02:44.000010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:56.573 [2024-12-06 04:02:44.001331] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.573 [2024-12-06 04:02:44.001365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.573 [2024-12-06 04:02:44.001379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.573 [2024-12-06 04:02:44.001394] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.573 [2024-12-06 04:02:44.001403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.573 [2024-12-06 04:02:44.001411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.573 [2024-12-06 04:02:44.001419] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.573 [2024-12-06 04:02:44.001425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.573 [2024-12-06 04:02:44.001433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.573 [2024-12-06 04:02:44.001440] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.573 [2024-12-06 04:02:44.001448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.573 [2024-12-06 04:02:44.001454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.573 04:02:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:56.573 04:02:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:56.831 04:02:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:56.832 04:02:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.832 04:02:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.832 04:02:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.832 04:02:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.832 04:02:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.832 04:02:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.832 04:02:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:56.832 04:02:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:56.832 04:02:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:56.832 04:02:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:56.832 04:02:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:56.832 04:02:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:56.832 04:02:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:56.832 04:02:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:56.832 04:02:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:56.832 04:02:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:56.832 04:02:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:57.090 04:02:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:57.090 04:02:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:09.286 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:09.286 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:09.286 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:09.286 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:09.286 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:09.286 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:09.286 04:02:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.286 04:02:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:09.286 04:02:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.286 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:09.286 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:09.286 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:09.286 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:09.286 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:09.286 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:09.286 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:09.286 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:09.286 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:09.286 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:09.286 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:09.286 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:09.286 04:02:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.286 04:02:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:09.286 04:02:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.286 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:09.287 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:09.287 [2024-12-06 04:02:56.500167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:09.287 [2024-12-06 04:02:56.501456] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.287 [2024-12-06 04:02:56.501492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.287 [2024-12-06 04:02:56.501503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.287 [2024-12-06 04:02:56.501520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.287 [2024-12-06 04:02:56.501527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.287 [2024-12-06 04:02:56.501538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.287 [2024-12-06 04:02:56.501546] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.287 [2024-12-06 04:02:56.501554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.287 [2024-12-06 04:02:56.501561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.287 [2024-12-06 04:02:56.501568] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.287 [2024-12-06 04:02:56.501575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.287 [2024-12-06 04:02:56.501583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.545 [2024-12-06 04:02:56.900169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:09.545 [2024-12-06 04:02:56.901436] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.545 [2024-12-06 04:02:56.901556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.545 [2024-12-06 04:02:56.901573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.545 [2024-12-06 04:02:56.901588] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.545 [2024-12-06 04:02:56.901596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.545 [2024-12-06 04:02:56.901603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.545 [2024-12-06 04:02:56.901613] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.545 [2024-12-06 04:02:56.901620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.545 [2024-12-06 04:02:56.901629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.545 [2024-12-06 04:02:56.901636] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.545 [2024-12-06 04:02:56.901644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.545 [2024-12-06 04:02:56.901651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.545 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:09.545 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:09.545 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:09.545 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:09.545 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:09.545 04:02:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:09.545 04:02:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.545 04:02:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:09.545 04:02:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.545 04:02:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:09.545 04:02:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:09.803 04:02:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:09.803 04:02:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:09.803 04:02:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:09.803 04:02:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:09.803 04:02:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:09.803 04:02:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:09.803 04:02:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:09.803 04:02:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:09.804 04:02:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:09.804 04:02:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:09.804 04:02:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:22.051 04:03:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:22.051 04:03:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:22.051 04:03:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:22.051 04:03:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:22.051 04:03:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:22.051 04:03:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:22.051 04:03:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.051 04:03:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:22.051 04:03:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.051 04:03:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:22.051 04:03:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:22.051 04:03:09 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.17 00:12:22.051 04:03:09 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.17 00:12:22.051 04:03:09 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:22.051 04:03:09 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.17 00:12:22.051 04:03:09 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.17 2 00:12:22.051 remove_attach_helper took 45.17s to complete (handling 2 nvme drive(s)) 04:03:09 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:22.051 04:03:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.051 04:03:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:22.051 04:03:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.051 04:03:09 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:22.051 04:03:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.051 04:03:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:22.051 04:03:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.051 04:03:09 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:22.051 04:03:09 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:22.051 04:03:09 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:22.051 04:03:09 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:22.051 04:03:09 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:22.051 04:03:09 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:22.051 04:03:09 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:22.051 04:03:09 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:12:22.051 04:03:09 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:22.051 04:03:09 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:22.051 04:03:09 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:22.051 04:03:09 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:22.051 04:03:09 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:28.637 04:03:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.637 04:03:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:28.637 04:03:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:28.637 [2024-12-06 04:03:15.399670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:28.637 [2024-12-06 04:03:15.400851] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.637 [2024-12-06 04:03:15.400885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.637 [2024-12-06 04:03:15.400896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.637 [2024-12-06 04:03:15.400913] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.637 [2024-12-06 04:03:15.400921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.637 [2024-12-06 04:03:15.400930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.637 [2024-12-06 04:03:15.400937] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.637 [2024-12-06 04:03:15.400945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.637 [2024-12-06 04:03:15.400951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.637 [2024-12-06 04:03:15.400960] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.637 [2024-12-06 04:03:15.400966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.637 [2024-12-06 04:03:15.400978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.637 [2024-12-06 04:03:15.799680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:28.637 [2024-12-06 04:03:15.802223] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.637 [2024-12-06 04:03:15.802255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.637 [2024-12-06 04:03:15.802266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.637 [2024-12-06 04:03:15.802281] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.637 [2024-12-06 04:03:15.802290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.637 [2024-12-06 04:03:15.802297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.637 [2024-12-06 04:03:15.802306] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.637 [2024-12-06 04:03:15.802313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.637 [2024-12-06 04:03:15.802323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.637 [2024-12-06 04:03:15.802330] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.637 [2024-12-06 04:03:15.802337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.637 [2024-12-06 04:03:15.802344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:28.637 04:03:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.637 04:03:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:28.637 04:03:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:28.637 04:03:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:28.637 04:03:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:28.637 04:03:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:28.638 04:03:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:28.638 04:03:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:28.638 04:03:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:28.638 04:03:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:28.638 04:03:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:28.638 04:03:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:40.927 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:40.927 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:40.927 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:40.927 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:40.927 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:40.927 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:40.927 04:03:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.927 04:03:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:40.927 04:03:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.927 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:40.927 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:40.927 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:40.927 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:40.927 [2024-12-06 04:03:28.199904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:40.927 [2024-12-06 04:03:28.201176] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.927 [2024-12-06 04:03:28.201310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.927 [2024-12-06 04:03:28.201376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.927 [2024-12-06 04:03:28.201436] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.927 [2024-12-06 04:03:28.201485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.927 [2024-12-06 04:03:28.201535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.927 [2024-12-06 04:03:28.201564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.927 [2024-12-06 04:03:28.201582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.927 [2024-12-06 04:03:28.201634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.927 [2024-12-06 04:03:28.201680] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.927 [2024-12-06 04:03:28.201697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.927 [2024-12-06 04:03:28.201737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.927 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:40.927 04:03:28 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:40.927 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:40.927 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:40.928 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:40.928 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:40.928 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:40.928 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:40.928 04:03:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.928 04:03:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:40.928 04:03:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.928 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:40.928 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:41.188 [2024-12-06 04:03:28.599912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:41.188 [2024-12-06 04:03:28.601160] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.188 [2024-12-06 04:03:28.601192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.188 [2024-12-06 04:03:28.601204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.188 [2024-12-06 04:03:28.601220] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.188 [2024-12-06 04:03:28.601231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.188 [2024-12-06 04:03:28.601239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.188 [2024-12-06 04:03:28.601248] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.188 [2024-12-06 04:03:28.601254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.188 [2024-12-06 04:03:28.601262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.188 [2024-12-06 04:03:28.601270] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.188 [2024-12-06 04:03:28.601279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.188 [2024-12-06 04:03:28.601285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.447 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:41.447 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:41.447 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:41.447 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:41.447 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:41.447 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:12:41.447 04:03:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.447 04:03:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:41.447 04:03:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.447 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:41.447 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:41.447 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:41.447 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:41.447 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:41.447 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:41.447 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:41.447 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:41.447 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:41.447 04:03:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:41.707 04:03:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:41.707 04:03:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:41.707 04:03:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:53.953 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:53.953 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:53.953 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:53.953 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:53.953 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:53.953 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:53.953 04:03:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.953 04:03:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:53.953 04:03:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.953 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:53.953 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:53.953 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:53.953 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:53.953 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:53.953 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:53.953 [2024-12-06 04:03:41.100170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
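Note on the trace above: sw_hotplug.sh@12-13 repeatedly invokes a small helper that lists the PCI addresses backing the current NVMe bdevs. A minimal sketch of that helper, reconstructed only from the traced commands (the /dev/fd/63 argument to jq indicates process substitution; the real script may differ in detail):

    # Reconstructed from the trace at sw_hotplug.sh@12-13 (a sketch, not the
    # verbatim source). rpc_cmd is SPDK's JSON-RPC wrapper from
    # autotest_common.sh; jq reads its output via process substitution,
    # which is why the trace shows /dev/fd/63.
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }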
00:12:53.953 [2024-12-06 04:03:41.101372] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.953 [2024-12-06 04:03:41.101469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.953 [2024-12-06 04:03:41.101528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.953 [2024-12-06 04:03:41.101590] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.953 [2024-12-06 04:03:41.101610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.953 [2024-12-06 04:03:41.101635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.953 [2024-12-06 04:03:41.101783] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.953 [2024-12-06 04:03:41.101808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.953 [2024-12-06 04:03:41.101903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.953 [2024-12-06 04:03:41.101933] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.953 [2024-12-06 04:03:41.101983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.953 [2024-12-06 04:03:41.102011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.953 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:53.953 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:53.953 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:53.953 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:53.954 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:53.954 04:03:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.954 04:03:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:53.954 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:53.954 04:03:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.954 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:53.954 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:54.214 [2024-12-06 04:03:41.500167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
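After triggering removal, the script polls until no NVMe bdevs remain. A plausible reconstruction of the wait loop traced at sw_hotplug.sh@50-51, based solely on the commands and comparisons visible in the log:

    # Sketch of the detach-wait loop (sw_hotplug.sh@50-51); the exact
    # control flow in the real script may differ.
    bdfs=($(bdev_bdfs))                     # @50: BDFs still backing bdevs
    while (( ${#bdfs[@]} > 0 )); do         # @50: e.g. "(( 1 > 0 ))" in the trace
        sleep 0.5
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"   # @51
        bdfs=($(bdev_bdfs))
    done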
00:12:54.214 [2024-12-06 04:03:41.501298] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.214 [2024-12-06 04:03:41.501330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.214 [2024-12-06 04:03:41.501342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.214 [2024-12-06 04:03:41.501354] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.214 [2024-12-06 04:03:41.501363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.214 [2024-12-06 04:03:41.501370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.214 [2024-12-06 04:03:41.501379] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.214 [2024-12-06 04:03:41.501386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.214 [2024-12-06 04:03:41.501394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.214 [2024-12-06 04:03:41.501401] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.214 [2024-12-06 04:03:41.501411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.214 [2024-12-06 04:03:41.501417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.214 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:54.214 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:54.214 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:54.214 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:54.214 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:54.214 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:54.214 04:03:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.214 04:03:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:54.214 04:03:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.214 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:54.214 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:54.474 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:54.474 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:54.474 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:54.474 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:54.474 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:54.474 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:54.474 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:54.474 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
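The re-attach sequence at sw_hotplug.sh@56-62 only shows the values being echoed (1, uio_pci_generic, each BDF twice, then an empty string), not their destinations. A hypothetical expansion of what a typical sysfs rebind of that shape looks like; every path below is an assumption, and the two BDF writes at @60/@61 are collapsed into one here:

    # Hypothetical sysfs rebind flow matching the echoed values above.
    echo 1 > /sys/bus/pci/rescan                                            # @56 (destination assumed)
    for bdf in 0000:00:10.0 0000:00:11.0; do                                # @58: for dev in "${nvmes[@]}"
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"  # @59 (assumed)
        echo "$bdf" > /sys/bus/pci/drivers_probe                            # @60/@61 (assumed)
        echo '' > "/sys/bus/pci/devices/$bdf/driver_override"               # @62: clear override (assumed)
    done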
00:12:54.474 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:54.474 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:54.474 04:03:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:06.715 04:03:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:06.715 04:03:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:06.715 04:03:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:06.715 04:03:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:06.715 04:03:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:06.715 04:03:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:06.715 04:03:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.715 04:03:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:06.715 04:03:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.715 04:03:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:06.715 04:03:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:06.715 04:03:53 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.64 00:13:06.715 04:03:53 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.64 00:13:06.715 04:03:53 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:06.715 04:03:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.64 00:13:06.715 04:03:53 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.64 2 00:13:06.715 remove_attach_helper took 44.64s to complete (handling 2 nvme drive(s)) 04:03:53 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:06.715 04:03:53 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67259 00:13:06.715 04:03:53 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67259 ']' 00:13:06.715 04:03:53 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67259 00:13:06.715 04:03:53 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:13:06.715 04:03:53 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.715 04:03:53 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67259 00:13:06.715 04:03:53 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:06.716 04:03:53 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:06.716 04:03:53 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67259' 00:13:06.716 killing process with pid 67259 00:13:06.716 04:03:53 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67259 00:13:06.716 04:03:53 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67259 00:13:07.663 04:03:55 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:08.234 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:08.496 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:08.496 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:08.496 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:08.758 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:08.758 00:13:08.758 real 2m29.068s 00:13:08.758 user 1m51.057s 00:13:08.758 sys 0m16.624s 00:13:08.758 04:03:56 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.758 04:03:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:08.758 ************************************ 00:13:08.758 END TEST sw_hotplug 00:13:08.758 ************************************ 00:13:08.758 04:03:56 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:13:08.758 04:03:56 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:08.758 04:03:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:08.758 04:03:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.758 04:03:56 -- common/autotest_common.sh@10 -- # set +x 00:13:08.758 ************************************ 00:13:08.758 START TEST nvme_xnvme 00:13:08.758 ************************************ 00:13:08.758 04:03:56 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:08.758 * Looking for test storage... 00:13:08.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:08.758 04:03:56 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:08.758 04:03:56 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:13:08.758 04:03:56 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:09.024 04:03:56 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:09.024 04:03:56 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:09.024 04:03:56 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:09.024 04:03:56 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:09.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.024 --rc genhtml_branch_coverage=1 00:13:09.024 --rc genhtml_function_coverage=1 00:13:09.024 --rc genhtml_legend=1 00:13:09.024 --rc geninfo_all_blocks=1 00:13:09.024 --rc geninfo_unexecuted_blocks=1 00:13:09.024 00:13:09.024 ' 00:13:09.024 04:03:56 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:09.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.024 --rc genhtml_branch_coverage=1 00:13:09.024 --rc genhtml_function_coverage=1 00:13:09.024 --rc genhtml_legend=1 00:13:09.024 --rc geninfo_all_blocks=1 00:13:09.024 --rc geninfo_unexecuted_blocks=1 00:13:09.024 00:13:09.024 ' 00:13:09.024 04:03:56 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:09.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.024 --rc genhtml_branch_coverage=1 00:13:09.024 --rc genhtml_function_coverage=1 00:13:09.024 --rc genhtml_legend=1 00:13:09.024 --rc geninfo_all_blocks=1 00:13:09.024 --rc geninfo_unexecuted_blocks=1 00:13:09.024 00:13:09.024 ' 00:13:09.024 04:03:56 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:09.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.024 --rc genhtml_branch_coverage=1 00:13:09.024 --rc genhtml_function_coverage=1 00:13:09.024 --rc genhtml_legend=1 00:13:09.024 --rc geninfo_all_blocks=1 00:13:09.024 --rc geninfo_unexecuted_blocks=1 00:13:09.024 00:13:09.024 ' 00:13:09.024 04:03:56 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:13:09.024 04:03:56 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:13:09.024 04:03:56 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:09.024 04:03:56 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:13:09.024 04:03:56 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:09.024 04:03:56 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:09.024 04:03:56 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:09.024 04:03:56 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:13:09.024 04:03:56 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:13:09.024 04:03:56 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
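Context for the CONFIG_* dump running through this part of the trace: autotest_common.sh@44-45 (traced above) sources build_config.sh, so each of these lines becomes an ordinary shell variable available to the tests. A minimal illustration; the consumer below is hypothetical, only the source line is taken from the trace:

    # build_config.sh turns the build-time configuration into shell variables.
    source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
    # Hypothetical gate, for illustration only:
    [[ $CONFIG_XNVME == y ]] && echo 'SPDK build has xnvme enabled'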
00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:13:09.024 04:03:56 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:13:09.025 04:03:56 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:13:09.025 04:03:56 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:13:09.025 04:03:56 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:09.025 04:03:56 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:09.025 04:03:56 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:13:09.025 04:03:56 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:13:09.025 04:03:56 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:13:09.025 04:03:56 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:13:09.025 04:03:56 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:13:09.025 04:03:56 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:13:09.025 04:03:56 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:09.025 04:03:56 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:09.025 04:03:56 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:09.025 04:03:56 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:09.025 04:03:56 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:09.025 04:03:56 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:09.025 04:03:56 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:13:09.025 04:03:56 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:09.025 #define SPDK_CONFIG_H 00:13:09.025 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:09.025 #define SPDK_CONFIG_APPS 1 00:13:09.025 #define SPDK_CONFIG_ARCH native 00:13:09.025 #define SPDK_CONFIG_ASAN 1 00:13:09.025 #undef SPDK_CONFIG_AVAHI 00:13:09.025 #undef SPDK_CONFIG_CET 00:13:09.025 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:09.025 #define SPDK_CONFIG_COVERAGE 1 00:13:09.025 #define SPDK_CONFIG_CROSS_PREFIX 00:13:09.025 #undef SPDK_CONFIG_CRYPTO 00:13:09.025 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:09.025 #undef SPDK_CONFIG_CUSTOMOCF 00:13:09.025 #undef SPDK_CONFIG_DAOS 00:13:09.025 #define SPDK_CONFIG_DAOS_DIR 00:13:09.025 #define SPDK_CONFIG_DEBUG 1 00:13:09.025 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:09.025 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:09.025 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:09.025 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:09.025 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:09.025 #undef SPDK_CONFIG_DPDK_UADK 00:13:09.025 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:09.025 #define SPDK_CONFIG_EXAMPLES 1 00:13:09.025 #undef SPDK_CONFIG_FC 00:13:09.025 #define SPDK_CONFIG_FC_PATH 00:13:09.025 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:09.025 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:09.025 #define SPDK_CONFIG_FSDEV 1 00:13:09.025 #undef SPDK_CONFIG_FUSE 00:13:09.025 #undef SPDK_CONFIG_FUZZER 00:13:09.025 #define SPDK_CONFIG_FUZZER_LIB 00:13:09.025 #undef SPDK_CONFIG_GOLANG 00:13:09.025 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:09.025 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:09.025 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:09.025 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:09.025 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:09.025 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:09.025 #undef SPDK_CONFIG_HAVE_LZ4 00:13:09.025 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:09.025 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:09.025 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:09.025 #define SPDK_CONFIG_IDXD 1 00:13:09.025 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:09.025 #undef SPDK_CONFIG_IPSEC_MB 00:13:09.025 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:09.025 #define SPDK_CONFIG_ISAL 1 00:13:09.025 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:09.025 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:09.025 #define SPDK_CONFIG_LIBDIR 00:13:09.025 #undef SPDK_CONFIG_LTO 00:13:09.025 #define SPDK_CONFIG_MAX_LCORES 128 00:13:09.025 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:13:09.025 #define SPDK_CONFIG_NVME_CUSE 1 00:13:09.025 #undef SPDK_CONFIG_OCF 00:13:09.025 #define SPDK_CONFIG_OCF_PATH 00:13:09.025 #define SPDK_CONFIG_OPENSSL_PATH 00:13:09.025 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:09.025 #define SPDK_CONFIG_PGO_DIR 00:13:09.025 #undef SPDK_CONFIG_PGO_USE 00:13:09.025 #define SPDK_CONFIG_PREFIX /usr/local 00:13:09.025 #undef SPDK_CONFIG_RAID5F 00:13:09.025 #undef SPDK_CONFIG_RBD 00:13:09.025 #define SPDK_CONFIG_RDMA 1 00:13:09.025 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:09.025 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:09.025 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:09.025 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:09.025 #define SPDK_CONFIG_SHARED 1 00:13:09.025 #undef SPDK_CONFIG_SMA 00:13:09.025 #define SPDK_CONFIG_TESTS 1 00:13:09.025 #undef SPDK_CONFIG_TSAN 00:13:09.025 #define SPDK_CONFIG_UBLK 1 00:13:09.025 #define SPDK_CONFIG_UBSAN 1 00:13:09.025 #undef SPDK_CONFIG_UNIT_TESTS 00:13:09.025 #undef SPDK_CONFIG_URING 00:13:09.025 #define SPDK_CONFIG_URING_PATH 00:13:09.025 #undef SPDK_CONFIG_URING_ZNS 00:13:09.025 #undef SPDK_CONFIG_USDT 00:13:09.025 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:09.025 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:09.025 #undef SPDK_CONFIG_VFIO_USER 00:13:09.025 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:09.025 #define SPDK_CONFIG_VHOST 1 00:13:09.025 #define SPDK_CONFIG_VIRTIO 1 00:13:09.025 #undef SPDK_CONFIG_VTUNE 00:13:09.025 #define SPDK_CONFIG_VTUNE_DIR 00:13:09.025 #define SPDK_CONFIG_WERROR 1 00:13:09.025 #define SPDK_CONFIG_WPDK_DIR 00:13:09.025 #define SPDK_CONFIG_XNVME 1 00:13:09.025 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:09.025 04:03:56 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:09.025 04:03:56 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:09.025 04:03:56 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:09.025 04:03:56 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.025 04:03:56 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.025 04:03:56 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.025 04:03:56 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.026 04:03:56 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.026 04:03:56 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.026 04:03:56 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:09.026 04:03:56 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:09.026 04:03:56 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:09.026 04:03:56 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:09.026 04:03:56 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:09.026 04:03:56 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:13:09.026 04:03:56 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:13:09.026 04:03:56 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:13:09.026 04:03:56 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:13:09.026 04:03:56 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:13:09.026 04:03:56 nvme_xnvme -- pm/common@68 -- # uname -s 00:13:09.026 04:03:56 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:13:09.026 04:03:56 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:09.026 
04:03:56 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:09.026 04:03:56 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:09.026 04:03:56 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:09.026 04:03:56 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:09.026 04:03:56 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:09.026 04:03:56 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:13:09.026 04:03:56 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:09.026 04:03:56 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:09.026 04:03:56 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:09.026 04:03:56 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:09.026 04:03:56 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:13:09.026 04:03:56 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:09.026 04:03:56 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:09.027 04:03:56 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:09.027 04:03:56 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
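For reference, the sanitizer environment assembled by the trace at autotest_common.sh@199-244 boils down to the following (option strings copied verbatim from the log; the cat at @206 is omitted, so treat this as a condensed sketch rather than the exact script):

    # ASan/UBSan runtime behavior for the test run.
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0     # @199
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134  # @200
    # Build an LSan suppression file so the known libfuse3 leak does not fail the run.
    asan_suppression_file=/var/tmp/asan_suppression_file     # @204
    rm -rf "$asan_suppression_file"                          # @205
    echo leak:libfuse3.so >> "$asan_suppression_file"        # @242
    export LSAN_OPTIONS=suppressions=$asan_suppression_file  # @244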
00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68606 ]] 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68606 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.8HrkMS 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.8HrkMS/tests/xnvme /tmp/spdk.8HrkMS 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:13:09.027 04:03:56 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974777856 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593501696 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:13:09.027 04:03:56 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260625408 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974777856 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593501696 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:09.028 04:03:56 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=95992119296 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3710660608 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:13:09.028 * Looking for test storage... 
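Note: the df -T parse above feeds the storage probe that follows: each mount's filesystem type, size, and available space are recorded, then candidate directories are tried in order until one has the requested headroom. A rough sketch of that selection loop (simplified from set_test_storage; the real function also special-cases tmpfs/ramfs and /home, and the units reported by df are assumed here to match the trace's byte counts):

  requested_size=2147483648   # 2 GiB plus overhead, as in the trace
  for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')   # mount point backing the dir
    target_space=${avails["$mount"]}                                 # free space recorded earlier
    (( target_space >= requested_size )) && break                    # found usable test storage
  done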
00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13974777856 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:09.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:09.028 04:03:56 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:09.028 04:03:56 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:09.028 04:03:56 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:09.028 04:03:56 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:09.028 04:03:56 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:09.028 04:03:56 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:09.028 04:03:56 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:09.028 04:03:56 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:09.028 04:03:56 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:09.028 04:03:56 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:09.028 04:03:56 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:09.028 04:03:56 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:09.028 04:03:56 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:09.028 04:03:56 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:09.028 04:03:56 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:09.028 04:03:56 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:09.028 04:03:56 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:09.028 04:03:56 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:09.028 04:03:56 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:09.028 04:03:56 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:09.029 04:03:56 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:09.029 04:03:56 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:09.291 04:03:56 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:09.291 04:03:56 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:09.291 04:03:56 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:09.291 04:03:56 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:09.291 04:03:56 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:09.291 04:03:56 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:09.291 04:03:56 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:09.291 04:03:56 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:09.291 04:03:56 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:09.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.291 --rc genhtml_branch_coverage=1 00:13:09.291 --rc genhtml_function_coverage=1 00:13:09.291 --rc genhtml_legend=1 00:13:09.291 --rc geninfo_all_blocks=1 00:13:09.291 --rc geninfo_unexecuted_blocks=1 00:13:09.291 00:13:09.291 ' 00:13:09.291 04:03:56 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:09.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.291 --rc genhtml_branch_coverage=1 00:13:09.291 --rc genhtml_function_coverage=1 00:13:09.291 --rc genhtml_legend=1 00:13:09.291 --rc geninfo_all_blocks=1 
00:13:09.291 --rc geninfo_unexecuted_blocks=1 00:13:09.291 00:13:09.291 ' 00:13:09.291 04:03:56 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:09.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.291 --rc genhtml_branch_coverage=1 00:13:09.291 --rc genhtml_function_coverage=1 00:13:09.291 --rc genhtml_legend=1 00:13:09.291 --rc geninfo_all_blocks=1 00:13:09.291 --rc geninfo_unexecuted_blocks=1 00:13:09.291 00:13:09.291 ' 00:13:09.291 04:03:56 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:09.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.291 --rc genhtml_branch_coverage=1 00:13:09.291 --rc genhtml_function_coverage=1 00:13:09.291 --rc genhtml_legend=1 00:13:09.291 --rc geninfo_all_blocks=1 00:13:09.291 --rc geninfo_unexecuted_blocks=1 00:13:09.291 00:13:09.291 ' 00:13:09.291 04:03:56 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:09.291 04:03:56 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:09.291 04:03:56 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.292 04:03:56 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.292 04:03:56 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.292 04:03:56 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.292 04:03:56 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.292 04:03:56 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.292 04:03:56 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:09.292 04:03:56 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.292 04:03:56 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:13:09.292 04:03:56 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:13:09.292 04:03:56 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:13:09.292 04:03:56 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:13:09.292 04:03:56 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:13:09.292 04:03:56 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:13:09.292 04:03:56 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:13:09.292 04:03:56 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:13:09.292 04:03:56 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:13:09.292 04:03:56 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:13:09.292 04:03:56 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:13:09.292 04:03:56 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:13:09.292 04:03:56 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:13:09.292 04:03:56 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:13:09.292 04:03:56 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:13:09.292 04:03:56 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:13:09.292 04:03:56 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:13:09.292 04:03:56 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:13:09.292 04:03:56 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:13:09.292 04:03:56 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:13:09.292 04:03:56 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:13:09.292 04:03:56 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:09.554 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:09.554 Waiting for block devices as requested 00:13:09.554 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:09.554 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:09.816 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:09.816 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:15.121 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:15.121 04:04:02 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:13:15.121 04:04:02 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:13:15.121 04:04:02 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:13:15.383 04:04:02 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:13:15.383 04:04:02 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:13:15.383 04:04:02 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:13:15.383 04:04:02 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:13:15.383 04:04:02 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:13:15.383 No valid GPT data, bailing 00:13:15.383 04:04:02 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:15.383 04:04:02 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:13:15.383 04:04:02 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:13:15.383 04:04:02 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:13:15.383 04:04:02 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:13:15.383 04:04:02 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:13:15.383 04:04:02 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:13:15.383 04:04:02 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:13:15.383 04:04:02 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:15.383 04:04:02 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:15.383 04:04:02 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:15.383 04:04:02 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:15.383 04:04:02 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:15.383 04:04:02 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:15.383 04:04:02 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:15.383 04:04:02 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:15.383 04:04:02 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:15.383 04:04:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:15.383 04:04:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.383 04:04:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:15.383 ************************************ 00:13:15.383 START TEST xnvme_rpc 00:13:15.383 ************************************ 00:13:15.383 04:04:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:15.383 04:04:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:15.383 04:04:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:15.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.383 04:04:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:15.383 04:04:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:15.383 04:04:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=68990 00:13:15.383 04:04:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 68990 00:13:15.383 04:04:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:15.383 04:04:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 68990 ']' 00:13:15.383 04:04:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.383 04:04:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.383 04:04:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.383 04:04:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.383 04:04:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.383 [2024-12-06 04:04:02.848693] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
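Note: everything the xnvme_rpc test checks here can be reproduced by hand against a running spdk_tgt. A sketch of the same RPC round-trip using scripts/rpc.py (the positional arguments mirror the rpc_cmd call in the trace: filename, bdev name, io_mechanism; check rpc.py's --help for the exact signature on your checkout):

  build/bin/spdk_tgt &                                   # listens on /var/tmp/spdk.sock
  scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
  scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # -> libaio
  scripts/rpc.py bdev_xnvme_delete xnvme_bdev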
00:13:15.383 [2024-12-06 04:04:02.848819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68990 ] 00:13:15.644 [2024-12-06 04:04:03.007624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.645 [2024-12-06 04:04:03.101683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.218 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:16.218 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:16.218 04:04:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:13:16.218 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.218 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.218 xnvme_bdev 00:13:16.218 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.218 04:04:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:16.218 04:04:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:16.218 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.218 04:04:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:16.218 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.218 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.218 04:04:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:16.218 04:04:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:16.218 04:04:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:16.218 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.218 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 68990 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 68990 ']' 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 68990 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68990 00:13:16.480 killing process with pid 68990 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68990' 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 68990 00:13:16.480 04:04:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 68990 00:13:17.870 ************************************ 00:13:17.870 END TEST xnvme_rpc 00:13:17.870 ************************************ 00:13:17.870 00:13:17.870 real 0m2.406s 00:13:17.870 user 0m2.511s 00:13:17.870 sys 0m0.343s 00:13:17.870 04:04:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.870 04:04:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.870 04:04:05 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:17.870 04:04:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:17.870 04:04:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.870 04:04:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.870 ************************************ 00:13:17.870 START TEST xnvme_bdevperf 00:13:17.870 ************************************ 00:13:17.870 04:04:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:17.870 04:04:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:17.870 04:04:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:17.870 04:04:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:17.870 04:04:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:17.870 04:04:05 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf
00:13:17.870 04:04:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:13:17.870 04:04:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:13:17.870 {
00:13:17.870 "subsystems": [
00:13:17.870 {
00:13:17.870 "subsystem": "bdev",
00:13:17.870 "config": [
00:13:17.870 {
00:13:17.870 "params": {
00:13:17.870 "io_mechanism": "libaio",
00:13:17.870 "conserve_cpu": false,
00:13:17.870 "filename": "/dev/nvme0n1",
00:13:17.870 "name": "xnvme_bdev"
00:13:17.870 },
00:13:17.870 "method": "bdev_xnvme_create"
00:13:17.870 },
00:13:17.870 {
00:13:17.870 "method": "bdev_wait_for_examine"
00:13:17.870 }
00:13:17.870 ]
00:13:17.870 }
00:13:17.870 ]
00:13:17.870 }
00:13:17.870 [2024-12-06 04:04:05.305300] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization...
00:13:17.871 [2024-12-06 04:04:05.305569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69059 ]
00:13:18.132 [2024-12-06 04:04:05.460969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:18.132 [2024-12-06 04:04:05.538905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:18.394 Running I/O for 5 seconds...
00:13:20.286 34716.00 IOPS, 135.61 MiB/s
[2024-12-06T04:04:08.774Z] 33316.00 IOPS, 130.14 MiB/s
[2024-12-06T04:04:10.173Z] 34300.33 IOPS, 133.99 MiB/s
[2024-12-06T04:04:11.117Z] 34057.75 IOPS, 133.04 MiB/s
[2024-12-06T04:04:11.117Z] 34036.60 IOPS, 132.96 MiB/s
00:13:23.590 Latency(us)
00:13:23.590 [2024-12-06T04:04:11.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:23.590 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:13:23.590 xnvme_bdev : 5.01 34004.15 132.83 0.00 0.00 1877.78 441.11 7662.67
00:13:23.590 [2024-12-06T04:04:11.117Z] ===================================================================================================================
00:13:23.590 [2024-12-06T04:04:11.117Z] Total : 34004.15 132.83 0.00 0.00 1877.78 441.11 7662.67
00:13:24.176 04:04:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:13:24.176 04:04:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:13:24.176 04:04:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:13:24.176 04:04:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:13:24.176 04:04:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:13:24.176 {
00:13:24.176 "subsystems": [
00:13:24.176 {
00:13:24.176 "subsystem": "bdev",
00:13:24.176 "config": [
00:13:24.176 {
00:13:24.176 "params": {
00:13:24.176 "io_mechanism": "libaio",
00:13:24.176 "conserve_cpu": false,
00:13:24.176 "filename": "/dev/nvme0n1",
00:13:24.176 "name": "xnvme_bdev"
00:13:24.176 },
00:13:24.176 "method": "bdev_xnvme_create"
00:13:24.176 },
00:13:24.176 {
00:13:24.176 "method": "bdev_wait_for_examine"
00:13:24.176 }
00:13:24.176 ]
00:13:24.176 }
00:13:24.176 ]
00:13:24.176 }
00:13:24.176 [2024-12-06 04:04:11.633001] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization...
00:13:24.176 [2024-12-06 04:04:11.633377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69135 ]
00:13:24.438 [2024-12-06 04:04:11.797105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:24.438 [2024-12-06 04:04:11.928986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:25.008 Running I/O for 5 seconds...
00:13:26.913 35538.00 IOPS, 138.82 MiB/s
[2024-12-06T04:04:15.379Z] 35545.00 IOPS, 138.85 MiB/s
[2024-12-06T04:04:16.321Z] 35330.33 IOPS, 138.01 MiB/s
[2024-12-06T04:04:17.268Z] 34990.25 IOPS, 136.68 MiB/s
[2024-12-06T04:04:17.268Z] 35133.40 IOPS, 137.24 MiB/s
00:13:29.741 Latency(us)
00:13:29.741 [2024-12-06T04:04:17.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:29.741 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:13:29.741 xnvme_bdev : 5.00 35118.08 137.18 0.00 0.00 1818.02 450.56 10132.87
00:13:29.741 [2024-12-06T04:04:17.268Z] ===================================================================================================================
00:13:29.741 [2024-12-06T04:04:17.268Z] Total : 35118.08 137.18 0.00 0.00 1818.02 450.56 10132.87
00:13:30.730
00:13:30.730 real 0m12.823s
00:13:30.730 user 0m4.881s
00:13:30.730 sys 0m6.032s
00:13:30.730 ************************************
00:13:30.730 END TEST xnvme_bdevperf
00:13:30.730 ************************************
00:13:30.730 04:04:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:30.730 04:04:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:13:30.730 04:04:18 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:13:30.730 04:04:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:30.730 04:04:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:30.730 04:04:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:30.730 ************************************
00:13:30.730 START TEST xnvme_fio_plugin
00:13:30.730 ************************************
00:13:30.730 04:04:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:13:30.730 04:04:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:13:30.730 04:04:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio
00:13:30.730 04:04:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:13:30.730 04:04:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:30.730 04:04:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:30.730 04:04:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:13:30.730 04:04:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:13:30.730 04:04:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:13:30.730 04:04:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:13:30.730 04:04:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:13:30.730 04:04:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:13:30.730 04:04:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:13:30.731 04:04:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:13:30.731 04:04:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:13:30.731 04:04:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:13:30.731 04:04:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:13:30.731 04:04:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:13:30.731 04:04:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:13:30.731 04:04:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:13:30.731 04:04:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:13:30.731 04:04:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:13:30.731 04:04:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:13:30.731 04:04:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:30.731 {
00:13:30.731 "subsystems": [
00:13:30.731 {
00:13:30.731 "subsystem": "bdev",
00:13:30.731 "config": [
00:13:30.731 {
00:13:30.731 "params": {
00:13:30.731 "io_mechanism": "libaio",
00:13:30.731 "conserve_cpu": false,
00:13:30.731 "filename": "/dev/nvme0n1",
00:13:30.731 "name": "xnvme_bdev"
00:13:30.731 },
00:13:30.731 "method": "bdev_xnvme_create"
00:13:30.731 },
00:13:30.731 {
00:13:30.731 "method": "bdev_wait_for_examine"
00:13:30.731 }
00:13:30.731 ]
00:13:30.731 }
00:13:30.731 ]
00:13:30.731 }
00:13:30.993 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:13:30.993 fio-3.35
00:13:30.993 Starting 1 thread
00:13:37.583
00:13:37.583 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69255: Fri Dec 6 04:04:24 2024
00:13:37.583 read: IOPS=34.8k, BW=136MiB/s (142MB/s)(680MiB/5001msec)
00:13:37.583 slat (usec): min=4, max=2488, avg=18.75, stdev=89.09
00:13:37.583 clat (usec): min=106, max=5129, avg=1320.83, stdev=521.98
00:13:37.583 lat (usec): min=203, max=5143, avg=1339.59, stdev=514.87
00:13:37.583 clat percentiles (usec):
00:13:37.583 | 1.00th=[ 302], 5.00th=[ 553], 10.00th=[ 725], 20.00th=[ 906],
00:13:37.583 | 30.00th=[ 1029], 40.00th=[ 1156], 50.00th=[ 1270], 60.00th=[ 1401],
00:13:37.583 | 70.00th=[ 1532], 80.00th=[ 1713], 90.00th=[ 1975], 95.00th=[ 2212],
00:13:37.583 | 99.00th=[ 2933], 99.50th=[ 3261], 99.90th=[ 3949], 99.95th=[ 4228],
00:13:37.583 | 99.99th=[ 4621]
00:13:37.583 bw ( KiB/s): min=129104, max=154368, per=99.88%, avg=138982.00, stdev=8275.70, samples=9
00:13:37.583 iops : min=32276, max=38592, avg=34745.44, stdev=2068.91, samples=9
00:13:37.583 lat (usec) : 250=0.51%, 500=3.30%, 750=7.16%, 1000=16.38%
00:13:37.583 lat (msec) : 2=63.56%, 4=9.00%, 10=0.09%
00:13:37.583 cpu : usr=48.96%, sys=43.10%, ctx=19, majf=0, minf=764
00:13:37.583 IO depths : 1=0.7%, 2=1.5%, 4=3.4%, 8=8.5%, 16=22.5%, 32=61.3%, >=64=2.1%
00:13:37.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:37.583 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0%
00:13:37.583 issued rwts: total=173978,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:37.583 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:37.583
00:13:37.583 Run status group 0 (all jobs):
00:13:37.583 READ: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=680MiB (713MB), run=5001-5001msec
00:13:37.583 -----------------------------------------------------
00:13:37.583 Suppressions used:
00:13:37.583 count bytes template
00:13:37.583 1 11 /usr/src/fio/parse.c
00:13:37.583 1 8 libtcmalloc_minimal.so
00:13:37.583 1 904 libcrypto.so
00:13:37.583 -----------------------------------------------------
00:13:37.583
00:13:37.583 04:04:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:13:37.583 04:04:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:37.583 04:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:37.583 04:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:13:37.583 04:04:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:13:37.583 04:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:13:37.583 04:04:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:13:37.583 04:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:13:37.583 04:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:13:37.583 04:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:13:37.583 04:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:13:37.583 04:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:13:37.583 04:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:13:37.583 04:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:13:37.583 04:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:13:37.583 04:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:13:37.583 04:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:13:37.583 04:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:13:37.844 04:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:13:37.844 04:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:13:37.844 04:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:37.844 {
00:13:37.844 "subsystems": [
00:13:37.844 {
00:13:37.844 "subsystem": "bdev",
00:13:37.844 "config": [
00:13:37.844 {
00:13:37.844 "params": {
00:13:37.844 "io_mechanism": "libaio",
00:13:37.844 "conserve_cpu": false,
00:13:37.844 "filename": "/dev/nvme0n1",
00:13:37.844 "name": "xnvme_bdev"
00:13:37.844 },
00:13:37.844 "method": "bdev_xnvme_create"
00:13:37.844 },
00:13:37.844 {
00:13:37.844 "method": "bdev_wait_for_examine"
00:13:37.844 }
00:13:37.844 ]
00:13:37.844 }
00:13:37.844 ]
00:13:37.844 }
00:13:37.844 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:13:37.844 fio-3.35
00:13:37.844 Starting 1 thread
00:13:44.457
00:13:44.457 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69349: Fri Dec 6 04:04:31 2024
00:13:44.457 write: IOPS=36.8k, BW=144MiB/s (151MB/s)(718MiB/5001msec); 0 zone resets
00:13:44.457 slat (usec): min=4, max=1841, avg=21.24, stdev=71.18
00:13:44.457 clat (usec): min=28, max=10412, avg=1154.68, stdev=568.36
00:13:44.457 lat (usec): min=166, max=10416, avg=1175.93, stdev=564.98
00:13:44.457 clat percentiles (usec):
00:13:44.457 | 1.00th=[ 243], 5.00th=[ 383], 10.00th=[ 502], 20.00th=[ 693],
00:13:44.457 | 30.00th=[ 832], 40.00th=[ 963], 50.00th=[ 1090], 60.00th=[ 1221],
00:13:44.457 | 70.00th=[ 1369], 80.00th=[ 1565], 90.00th=[ 1844], 95.00th=[ 2114],
00:13:44.457 | 99.00th=[ 2900], 99.50th=[ 3228], 99.90th=[ 4015], 99.95th=[ 4621],
00:13:44.457 | 99.99th=[ 9110]
00:13:44.457 bw ( KiB/s): min=124520, max=169808, per=100.00%, avg=147981.00, stdev=16207.96, samples=9
00:13:44.457 iops : min=31130, max=42452, avg=36995.11, stdev=4052.14, samples=9
00:13:44.457 lat (usec) : 50=0.01%, 100=0.01%, 250=1.11%, 500=8.75%, 750=13.99%
00:13:44.457 lat (usec) : 1000=19.40%
00:13:44.457 lat (msec) : 2=49.91%, 4=6.73%, 10=0.10%, 20=0.01%
00:13:44.457 cpu : usr=35.62%, sys=51.02%, ctx=17, majf=0, minf=765
00:13:44.457 IO depths : 1=0.3%, 2=0.9%, 4=3.0%, 8=9.1%, 16=24.2%, 32=60.5%, >=64=2.0%
00:13:44.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:44.457 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0%
00:13:44.457 issued rwts: total=0,183844,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:44.457 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:44.457
00:13:44.457 Run status group 0 (all jobs):
00:13:44.457 WRITE: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=718MiB (753MB), run=5001-5001msec
00:13:44.719 -----------------------------------------------------
00:13:44.719 Suppressions used:
00:13:44.719 count bytes template
00:13:44.719 1 11 /usr/src/fio/parse.c
00:13:44.719 1 8 libtcmalloc_minimal.so
00:13:44.719 1 904 libcrypto.so
00:13:44.719 -----------------------------------------------------
00:13:44.719
00:13:44.719 ************************************
00:13:44.719 END TEST xnvme_fio_plugin 00:13:44.719 ************************************ 00:13:44.719 00:13:44.719 real 0m13.973s 00:13:44.719 user 0m7.171s 00:13:44.719 sys 0m5.336s 00:13:44.719 04:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.719 04:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:44.719 04:04:32 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:44.719 04:04:32 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:44.719 04:04:32 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:44.719 04:04:32 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:44.719 04:04:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:44.719 04:04:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.719 04:04:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:44.719 ************************************ 00:13:44.719 START TEST xnvme_rpc 00:13:44.719 ************************************ 00:13:44.719 04:04:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:44.719 04:04:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:44.719 04:04:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:44.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.719 04:04:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:44.719 04:04:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:44.719 04:04:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69435 00:13:44.719 04:04:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69435 00:13:44.719 04:04:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:44.719 04:04:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69435 ']' 00:13:44.719 04:04:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.719 04:04:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.719 04:04:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.719 04:04:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.719 04:04:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.981 [2024-12-06 04:04:32.274620] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
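Note: the xnvme_fio_plugin runs above hinge on one detail worth calling out: when SPDK is built with ASAN, fio's external spdk_bdev ioengine only loads if libasan is preloaded ahead of the plugin, which is what the ldd | grep libasan | awk resolution in the trace is for. The eventual invocation reduces to roughly this sketch (library path and fio flags as seen in the trace; the JSON conf is the same bdev_xnvme_create blob that gen_conf feeds on a file descriptor):

  asan_lib=$(ldd build/fio/spdk_bdev | grep libasan | awk '{print $3}')  # e.g. /usr/lib64/libasan.so.8
  LD_PRELOAD="$asan_lib build/fio/spdk_bdev" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
    --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
    --time_based --runtime=5 --thread=1 --name xnvme_bdev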
00:13:44.981 [2024-12-06 04:04:32.275332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69435 ] 00:13:44.981 [2024-12-06 04:04:32.449054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.242 [2024-12-06 04:04:32.583391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.816 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.816 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:45.816 04:04:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:13:45.816 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.816 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.816 xnvme_bdev 00:13:45.816 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.816 04:04:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:45.816 04:04:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:45.816 04:04:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:45.816 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.816 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.077 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.077 04:04:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:46.077 04:04:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:46.077 04:04:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:46.077 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.077 04:04:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:46.077 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.077 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.077 04:04:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:46.077 04:04:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:46.077 04:04:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:46.077 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.077 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.077 04:04:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:46.077 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.077 04:04:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:46.077 04:04:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:46.077 04:04:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:46.077 04:04:33 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.077 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.078 04:04:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:46.078 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.078 04:04:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:46.078 04:04:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:46.078 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.078 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.078 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.078 04:04:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69435 00:13:46.078 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69435 ']' 00:13:46.078 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69435 00:13:46.078 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:46.078 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:46.078 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69435 00:13:46.078 killing process with pid 69435 00:13:46.078 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:46.078 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:46.078 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69435' 00:13:46.078 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69435 00:13:46.078 04:04:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69435 00:13:47.995 ************************************ 00:13:47.995 END TEST xnvme_rpc 00:13:47.995 ************************************ 00:13:47.995 00:13:47.995 real 0m3.041s 00:13:47.995 user 0m2.995s 00:13:47.995 sys 0m0.511s 00:13:47.995 04:04:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.995 04:04:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.995 04:04:35 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:47.995 04:04:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:47.995 04:04:35 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:47.995 04:04:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:47.995 ************************************ 00:13:47.995 START TEST xnvme_bdevperf 00:13:47.995 ************************************ 00:13:47.995 04:04:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:47.995 04:04:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:47.995 04:04:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:47.995 04:04:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:47.995 04:04:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:47.995 04:04:35 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:47.995 04:04:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:47.995 04:04:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:47.995 { 00:13:47.995 "subsystems": [ 00:13:47.995 { 00:13:47.995 "subsystem": "bdev", 00:13:47.995 "config": [ 00:13:47.995 { 00:13:47.995 "params": { 00:13:47.995 "io_mechanism": "libaio", 00:13:47.995 "conserve_cpu": true, 00:13:47.995 "filename": "/dev/nvme0n1", 00:13:47.995 "name": "xnvme_bdev" 00:13:47.995 }, 00:13:47.995 "method": "bdev_xnvme_create" 00:13:47.995 }, 00:13:47.995 { 00:13:47.995 "method": "bdev_wait_for_examine" 00:13:47.995 } 00:13:47.995 ] 00:13:47.995 } 00:13:47.995 ] 00:13:47.995 } 00:13:47.995 [2024-12-06 04:04:35.363508] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:13:47.995 [2024-12-06 04:04:35.363669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69509 ] 00:13:48.255 [2024-12-06 04:04:35.528174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.255 [2024-12-06 04:04:35.650779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.514 Running I/O for 5 seconds... 00:13:50.445 32052.00 IOPS, 125.20 MiB/s [2024-12-06T04:04:39.352Z] 31982.00 IOPS, 124.93 MiB/s [2024-12-06T04:04:40.321Z] 32144.00 IOPS, 125.56 MiB/s [2024-12-06T04:04:41.264Z] 32093.00 IOPS, 125.36 MiB/s 00:13:53.737 Latency(us) 00:13:53.737 [2024-12-06T04:04:41.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.737 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:53.737 xnvme_bdev : 5.00 32162.35 125.63 0.00 0.00 1985.25 270.97 7108.14 00:13:53.737 [2024-12-06T04:04:41.264Z] =================================================================================================================== 00:13:53.737 [2024-12-06T04:04:41.264Z] Total : 32162.35 125.63 0.00 0.00 1985.25 270.97 7108.14 00:13:54.311 04:04:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:54.311 04:04:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:54.311 04:04:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:54.311 04:04:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:54.311 04:04:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:54.311 { 00:13:54.311 "subsystems": [ 00:13:54.311 { 00:13:54.311 "subsystem": "bdev", 00:13:54.311 "config": [ 00:13:54.311 { 00:13:54.311 "params": { 00:13:54.311 "io_mechanism": "libaio", 00:13:54.311 "conserve_cpu": true, 00:13:54.311 "filename": "/dev/nvme0n1", 00:13:54.311 "name": "xnvme_bdev" 00:13:54.311 }, 00:13:54.311 "method": "bdev_xnvme_create" 00:13:54.311 }, 00:13:54.311 { 00:13:54.311 "method": "bdev_wait_for_examine" 00:13:54.311 } 00:13:54.311 ] 00:13:54.311 } 00:13:54.311 ] 00:13:54.311 } 00:13:54.573 [2024-12-06 04:04:41.865672] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
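The bdevperf runs in this test feed their generated JSON in over /dev/fd/62; a standalone reproduction just needs the config printed above saved to a file. A sketch, assuming the relative build/examples path from an SPDK tree and a hypothetical /tmp/xnvme.json holding that config:

    # Reuse the generated config shown above (libaio, conserve_cpu: true)
    # by saving it to /tmp/xnvme.json, then point bdevperf at it with the
    # same flags as the logged invocation.
    ./build/examples/bdevperf --json /tmp/xnvme.json \
        -q 64 -w randread -t 5 -T xnvme_bdev -o 4096

    # -q 64  queue depth          -w randread  workload pattern
    # -t 5   seconds to run       -T           target bdev name
    # -o 4096  I/O size in bytes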
00:13:54.573 [2024-12-06 04:04:41.865848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69584 ] 00:13:54.573 [2024-12-06 04:04:42.031207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.834 [2024-12-06 04:04:42.165735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.097 Running I/O for 5 seconds... 00:13:56.988 36216.00 IOPS, 141.47 MiB/s [2024-12-06T04:04:45.925Z] 28494.00 IOPS, 111.30 MiB/s [2024-12-06T04:04:46.494Z] 20102.67 IOPS, 78.53 MiB/s [2024-12-06T04:04:47.877Z] 15953.75 IOPS, 62.32 MiB/s [2024-12-06T04:04:47.877Z] 13463.20 IOPS, 52.59 MiB/s 00:14:00.350 Latency(us) 00:14:00.350 [2024-12-06T04:04:47.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.350 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:00.350 xnvme_bdev : 5.02 13420.75 52.42 0.00 0.00 4754.04 64.59 41741.39 00:14:00.350 [2024-12-06T04:04:47.877Z] =================================================================================================================== 00:14:00.350 [2024-12-06T04:04:47.877Z] Total : 13420.75 52.42 0.00 0.00 4754.04 64.59 41741.39 00:14:00.923 00:14:00.923 real 0m13.045s 00:14:00.923 user 0m7.459s 00:14:00.923 sys 0m4.296s 00:14:00.923 ************************************ 00:14:00.923 END TEST xnvme_bdevperf 00:14:00.923 ************************************ 00:14:00.923 04:04:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.923 04:04:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:00.923 04:04:48 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:00.923 04:04:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:00.923 04:04:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.923 04:04:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.923 ************************************ 00:14:00.923 START TEST xnvme_fio_plugin 00:14:00.923 ************************************ 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:00.923 04:04:48 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:00.923 04:04:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:00.923 { 00:14:00.923 "subsystems": [ 00:14:00.923 { 00:14:00.923 "subsystem": "bdev", 00:14:00.923 "config": [ 00:14:00.923 { 00:14:00.923 "params": { 00:14:00.923 "io_mechanism": "libaio", 00:14:00.923 "conserve_cpu": true, 00:14:00.923 "filename": "/dev/nvme0n1", 00:14:00.923 "name": "xnvme_bdev" 00:14:00.923 }, 00:14:00.923 "method": "bdev_xnvme_create" 00:14:00.923 }, 00:14:00.923 { 00:14:00.923 "method": "bdev_wait_for_examine" 00:14:00.923 } 00:14:00.923 ] 00:14:00.923 } 00:14:00.923 ] 00:14:00.923 } 00:14:01.183 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:01.183 fio-3.35 00:14:01.183 Starting 1 thread 00:14:07.772 00:14:07.772 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69708: Fri Dec 6 04:04:54 2024 00:14:07.772 read: IOPS=34.2k, BW=134MiB/s (140MB/s)(668MiB/5001msec) 00:14:07.772 slat (usec): min=4, max=2249, avg=20.74, stdev=93.26 00:14:07.772 clat (usec): min=105, max=5171, avg=1309.69, stdev=522.63 00:14:07.772 lat (usec): min=177, max=5189, avg=1330.43, stdev=514.23 00:14:07.772 clat percentiles (usec): 00:14:07.772 | 1.00th=[ 273], 5.00th=[ 486], 10.00th=[ 668], 20.00th=[ 889], 00:14:07.772 | 30.00th=[ 1045], 40.00th=[ 1172], 50.00th=[ 1287], 60.00th=[ 1401], 00:14:07.772 | 70.00th=[ 1532], 80.00th=[ 1696], 90.00th=[ 1958], 95.00th=[ 2212], 00:14:07.772 | 99.00th=[ 2802], 99.50th=[ 3097], 99.90th=[ 3752], 99.95th=[ 3982], 00:14:07.772 | 99.99th=[ 5080] 00:14:07.772 bw ( KiB/s): min=121848, max=148448, 
per=99.25%, avg=135811.56, stdev=8999.80, samples=9 00:14:07.772 iops : min=30462, max=37112, avg=33952.89, stdev=2249.95, samples=9 00:14:07.772 lat (usec) : 250=0.71%, 500=4.59%, 750=7.98%, 1000=13.55% 00:14:07.772 lat (msec) : 2=64.17%, 4=8.94%, 10=0.05% 00:14:07.772 cpu : usr=42.60%, sys=49.10%, ctx=13, majf=0, minf=764 00:14:07.772 IO depths : 1=0.5%, 2=1.3%, 4=3.2%, 8=8.7%, 16=23.3%, 32=60.8%, >=64=2.1% 00:14:07.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.772 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:14:07.772 issued rwts: total=171081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:07.772 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:07.772 00:14:07.772 Run status group 0 (all jobs): 00:14:07.772 READ: bw=134MiB/s (140MB/s), 134MiB/s-134MiB/s (140MB/s-140MB/s), io=668MiB (701MB), run=5001-5001msec 00:14:08.034 ----------------------------------------------------- 00:14:08.034 Suppressions used: 00:14:08.034 count bytes template 00:14:08.034 1 11 /usr/src/fio/parse.c 00:14:08.034 1 8 libtcmalloc_minimal.so 00:14:08.034 1 904 libcrypto.so 00:14:08.034 ----------------------------------------------------- 00:14:08.034 00:14:08.034 04:04:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:08.034 04:04:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:08.034 04:04:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:08.034 04:04:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:08.034 04:04:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:08.034 04:04:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:08.034 04:04:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:08.034 04:04:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:08.034 04:04:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:08.034 04:04:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:08.034 04:04:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:08.034 04:04:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:08.034 04:04:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:08.034 04:04:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:08.034 04:04:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:08.034 04:04:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:08.034 04:04:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:08.034 04:04:55 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:08.034 04:04:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:08.034 04:04:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:08.034 04:04:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:08.034 { 00:14:08.034 "subsystems": [ 00:14:08.034 { 00:14:08.034 "subsystem": "bdev", 00:14:08.034 "config": [ 00:14:08.034 { 00:14:08.034 "params": { 00:14:08.034 "io_mechanism": "libaio", 00:14:08.034 "conserve_cpu": true, 00:14:08.034 "filename": "/dev/nvme0n1", 00:14:08.034 "name": "xnvme_bdev" 00:14:08.034 }, 00:14:08.034 "method": "bdev_xnvme_create" 00:14:08.034 }, 00:14:08.034 { 00:14:08.034 "method": "bdev_wait_for_examine" 00:14:08.034 } 00:14:08.034 ] 00:14:08.034 } 00:14:08.034 ] 00:14:08.034 } 00:14:08.034 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:08.034 fio-3.35 00:14:08.034 Starting 1 thread 00:14:14.662 00:14:14.662 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69795: Fri Dec 6 04:05:01 2024 00:14:14.662 write: IOPS=33.8k, BW=132MiB/s (138MB/s)(660MiB/5001msec); 0 zone resets 00:14:14.662 slat (usec): min=4, max=2534, avg=21.34, stdev=91.94 00:14:14.662 clat (usec): min=51, max=26605, avg=1322.01, stdev=733.10 00:14:14.662 lat (usec): min=129, max=26609, avg=1343.35, stdev=727.56 00:14:14.662 clat percentiles (usec): 00:14:14.662 | 1.00th=[ 277], 5.00th=[ 478], 10.00th=[ 644], 20.00th=[ 857], 00:14:14.662 | 30.00th=[ 1012], 40.00th=[ 1139], 50.00th=[ 1270], 60.00th=[ 1401], 00:14:14.662 | 70.00th=[ 1549], 80.00th=[ 1713], 90.00th=[ 1975], 95.00th=[ 2245], 00:14:14.662 | 99.00th=[ 2999], 99.50th=[ 3458], 99.90th=[ 7504], 99.95th=[11338], 00:14:14.662 | 99.99th=[25560] 00:14:14.662 bw ( KiB/s): min=125992, max=150344, per=99.80%, avg=134785.33, stdev=7744.31, samples=9 00:14:14.662 iops : min=31498, max=37586, avg=33696.33, stdev=1936.08, samples=9 00:14:14.662 lat (usec) : 100=0.01%, 250=0.69%, 500=4.81%, 750=8.81%, 1000=14.76% 00:14:14.662 lat (msec) : 2=61.67%, 4=8.99%, 10=0.20%, 20=0.04%, 50=0.03% 00:14:14.662 cpu : usr=42.70%, sys=48.56%, ctx=15, majf=0, minf=765 00:14:14.662 IO depths : 1=0.5%, 2=1.2%, 4=3.1%, 8=8.7%, 16=23.3%, 32=61.1%, >=64=2.1% 00:14:14.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.662 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:14:14.662 issued rwts: total=0,168845,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:14.662 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:14.662 00:14:14.662 Run status group 0 (all jobs): 00:14:14.662 WRITE: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=660MiB (692MB), run=5001-5001msec 00:14:14.950 ----------------------------------------------------- 00:14:14.950 Suppressions used: 00:14:14.950 count bytes template 00:14:14.950 1 11 /usr/src/fio/parse.c 00:14:14.950 1 8 libtcmalloc_minimal.so 00:14:14.950 1 904 libcrypto.so 00:14:14.950 ----------------------------------------------------- 00:14:14.950 00:14:14.950 00:14:14.950 real 0m13.850s 00:14:14.950 user 0m7.117s 00:14:14.950 sys 
0m5.489s 00:14:14.950 04:05:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:14.950 ************************************ 00:14:14.950 END TEST xnvme_fio_plugin 00:14:14.950 ************************************ 00:14:14.950 04:05:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:14.950 04:05:02 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:14.950 04:05:02 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:14.950 04:05:02 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:14:14.950 04:05:02 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:14:14.950 04:05:02 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:14.950 04:05:02 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:14.950 04:05:02 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:14.950 04:05:02 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:14.950 04:05:02 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:14.950 04:05:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:14.950 04:05:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.950 04:05:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:14.950 ************************************ 00:14:14.950 START TEST xnvme_rpc 00:14:14.950 ************************************ 00:14:14.950 04:05:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:14.951 04:05:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:14.951 04:05:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:14.951 04:05:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:14.951 04:05:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:14.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.951 04:05:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69886 00:14:14.951 04:05:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69886 00:14:14.951 04:05:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69886 ']' 00:14:14.951 04:05:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.951 04:05:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.951 04:05:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:14.951 04:05:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.951 04:05:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.951 04:05:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.951 [2024-12-06 04:05:02.410559] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
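Only two knobs change as the suite moves into this io_uring pass: the io_mechanism positional and the dropped -c flag (the test passes an empty string for the false case). By hand, under the same rpc.py path assumption as before:

    # io_uring variant of the create: omitting -c leaves conserve_cpu
    # false, which is what the test's empty-string argument amounts to.
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring

    # Should print "false" for this pass.
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'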
00:14:14.951 [2024-12-06 04:05:02.410707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69886 ] 00:14:15.213 [2024-12-06 04:05:02.577235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.213 [2024-12-06 04:05:02.716031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.159 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.159 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:16.159 04:05:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:14:16.159 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.160 xnvme_bdev 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69886 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69886 ']' 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69886 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69886 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:16.160 killing process with pid 69886 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69886' 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69886 00:14:16.160 04:05:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69886 00:14:18.075 ************************************ 00:14:18.075 END TEST xnvme_rpc 00:14:18.075 ************************************ 00:14:18.075 00:14:18.075 real 0m2.994s 00:14:18.075 user 0m3.013s 00:14:18.075 sys 0m0.480s 00:14:18.075 04:05:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.075 04:05:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.075 04:05:05 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:18.075 04:05:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:18.075 04:05:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.075 04:05:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:18.075 ************************************ 00:14:18.075 START TEST xnvme_bdevperf 00:14:18.075 ************************************ 00:14:18.075 04:05:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:18.075 04:05:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:18.075 04:05:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:18.075 04:05:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:18.075 04:05:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:18.075 04:05:05 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:18.075 04:05:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:18.075 04:05:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:18.075 { 00:14:18.075 "subsystems": [ 00:14:18.075 { 00:14:18.075 "subsystem": "bdev", 00:14:18.075 "config": [ 00:14:18.075 { 00:14:18.075 "params": { 00:14:18.075 "io_mechanism": "io_uring", 00:14:18.075 "conserve_cpu": false, 00:14:18.075 "filename": "/dev/nvme0n1", 00:14:18.075 "name": "xnvme_bdev" 00:14:18.075 }, 00:14:18.075 "method": "bdev_xnvme_create" 00:14:18.075 }, 00:14:18.075 { 00:14:18.075 "method": "bdev_wait_for_examine" 00:14:18.075 } 00:14:18.075 ] 00:14:18.076 } 00:14:18.076 ] 00:14:18.076 } 00:14:18.076 [2024-12-06 04:05:05.465622] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:14:18.076 [2024-12-06 04:05:05.466008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69952 ] 00:14:18.336 [2024-12-06 04:05:05.632377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.336 [2024-12-06 04:05:05.768607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.596 Running I/O for 5 seconds... 00:14:20.968 29715.00 IOPS, 116.07 MiB/s [2024-12-06T04:05:09.439Z] 30368.00 IOPS, 118.62 MiB/s [2024-12-06T04:05:10.384Z] 30182.00 IOPS, 117.90 MiB/s [2024-12-06T04:05:11.381Z] 30033.25 IOPS, 117.32 MiB/s [2024-12-06T04:05:11.381Z] 30097.00 IOPS, 117.57 MiB/s 00:14:23.854 Latency(us) 00:14:23.854 [2024-12-06T04:05:11.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.854 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:23.854 xnvme_bdev : 5.01 30066.25 117.45 0.00 0.00 2122.55 141.00 40934.79 00:14:23.854 [2024-12-06T04:05:11.381Z] =================================================================================================================== 00:14:23.854 [2024-12-06T04:05:11.381Z] Total : 30066.25 117.45 0.00 0.00 2122.55 141.00 40934.79 00:14:24.425 04:05:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:24.425 04:05:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:24.425 04:05:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:24.426 04:05:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:24.426 04:05:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:24.426 { 00:14:24.426 "subsystems": [ 00:14:24.426 { 00:14:24.426 "subsystem": "bdev", 00:14:24.426 "config": [ 00:14:24.426 { 00:14:24.426 "params": { 00:14:24.426 "io_mechanism": "io_uring", 00:14:24.426 "conserve_cpu": false, 00:14:24.426 "filename": "/dev/nvme0n1", 00:14:24.426 "name": "xnvme_bdev" 00:14:24.426 }, 00:14:24.426 "method": "bdev_xnvme_create" 00:14:24.426 }, 00:14:24.426 { 00:14:24.426 "method": "bdev_wait_for_examine" 00:14:24.426 } 00:14:24.426 ] 00:14:24.426 } 00:14:24.426 ] 00:14:24.426 } 00:14:24.686 [2024-12-06 04:05:11.965381] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
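The MiB/s column in these tables is derived from IOPS at the fixed 4096-byte I/O size, so the randread total above can be sanity-checked with one line of arithmetic:

    # 30066.25 IOPS * 4096 B per I/O / 1048576 B per MiB ~ 117.45 MiB/s,
    # reproducing the Total row of the randread table above.
    awk 'BEGIN { printf "%.2f MiB/s\n", 30066.25 * 4096 / 1048576 }'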
00:14:24.686 [2024-12-06 04:05:11.965745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70032 ] 00:14:24.686 [2024-12-06 04:05:12.133092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.947 [2024-12-06 04:05:12.274689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.208 Running I/O for 5 seconds... 00:14:27.097 5221.00 IOPS, 20.39 MiB/s [2024-12-06T04:05:15.617Z] 5205.00 IOPS, 20.33 MiB/s [2024-12-06T04:05:16.999Z] 4927.00 IOPS, 19.25 MiB/s [2024-12-06T04:05:17.941Z] 4940.00 IOPS, 19.30 MiB/s [2024-12-06T04:05:17.941Z] 4977.80 IOPS, 19.44 MiB/s 00:14:30.414 Latency(us) 00:14:30.414 [2024-12-06T04:05:17.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.414 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:30.414 xnvme_bdev : 5.01 4977.50 19.44 0.00 0.00 12834.69 65.77 186323.89 00:14:30.414 [2024-12-06T04:05:17.941Z] =================================================================================================================== 00:14:30.414 [2024-12-06T04:05:17.941Z] Total : 4977.50 19.44 0.00 0.00 12834.69 65.77 186323.89 00:14:30.982 00:14:30.982 real 0m13.019s 00:14:30.982 user 0m6.082s 00:14:30.982 sys 0m6.644s 00:14:30.982 ************************************ 00:14:30.982 END TEST xnvme_bdevperf 00:14:30.982 ************************************ 00:14:30.982 04:05:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.982 04:05:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:30.982 04:05:18 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:30.982 04:05:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:30.982 04:05:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:30.982 04:05:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:30.982 ************************************ 00:14:30.982 START TEST xnvme_fio_plugin 00:14:30.982 ************************************ 00:14:30.982 04:05:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:30.982 04:05:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:30.982 04:05:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:30.982 04:05:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:30.982 04:05:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:30.982 04:05:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:30.982 04:05:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:30.982 04:05:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:14:30.982 04:05:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:30.982 04:05:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:30.982 04:05:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:30.982 04:05:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:30.982 04:05:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:30.982 04:05:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:30.982 04:05:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:30.982 04:05:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:30.982 04:05:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:30.983 04:05:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:30.983 04:05:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:30.983 04:05:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:30.983 04:05:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:30.983 04:05:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:30.983 04:05:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:30.983 04:05:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:31.243 { 00:14:31.243 "subsystems": [ 00:14:31.243 { 00:14:31.243 "subsystem": "bdev", 00:14:31.243 "config": [ 00:14:31.243 { 00:14:31.243 "params": { 00:14:31.243 "io_mechanism": "io_uring", 00:14:31.243 "conserve_cpu": false, 00:14:31.243 "filename": "/dev/nvme0n1", 00:14:31.243 "name": "xnvme_bdev" 00:14:31.243 }, 00:14:31.243 "method": "bdev_xnvme_create" 00:14:31.243 }, 00:14:31.243 { 00:14:31.243 "method": "bdev_wait_for_examine" 00:14:31.243 } 00:14:31.243 ] 00:14:31.243 } 00:14:31.243 ] 00:14:31.243 } 00:14:31.243 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:31.243 fio-3.35 00:14:31.243 Starting 1 thread 00:14:37.897 00:14:37.897 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70151: Fri Dec 6 04:05:24 2024 00:14:37.897 read: IOPS=35.6k, BW=139MiB/s (146MB/s)(695MiB/5003msec) 00:14:37.897 slat (usec): min=2, max=167, avg= 4.36, stdev= 2.52 00:14:37.897 clat (usec): min=627, max=4101, avg=1620.00, stdev=442.29 00:14:37.897 lat (usec): min=631, max=4105, avg=1624.35, stdev=442.99 00:14:37.897 clat percentiles (usec): 00:14:37.897 | 1.00th=[ 725], 5.00th=[ 832], 10.00th=[ 930], 20.00th=[ 1254], 00:14:37.897 | 30.00th=[ 1450], 40.00th=[ 1565], 50.00th=[ 1647], 60.00th=[ 1745], 00:14:37.897 | 70.00th=[ 1844], 80.00th=[ 1958], 90.00th=[ 2147], 95.00th=[ 2311], 00:14:37.897 | 99.00th=[ 2704], 99.50th=[ 2868], 99.90th=[ 3195], 99.95th=[ 3294], 00:14:37.897 | 99.99th=[ 3589] 00:14:37.897 bw ( KiB/s): min=123904, max=146432, 
per=92.74%, avg=131953.89, stdev=9098.85, samples=9 00:14:37.897 iops : min=30976, max=36608, avg=32988.44, stdev=2274.73, samples=9 00:14:37.897 lat (usec) : 750=1.82%, 1000=10.98% 00:14:37.897 lat (msec) : 2=69.90%, 4=17.30%, 10=0.01% 00:14:37.897 cpu : usr=33.21%, sys=65.29%, ctx=20, majf=0, minf=762 00:14:37.897 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:37.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.897 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:37.897 issued rwts: total=177960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.897 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:37.897 00:14:37.897 Run status group 0 (all jobs): 00:14:37.897 READ: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=695MiB (729MB), run=5003-5003msec 00:14:37.897 ----------------------------------------------------- 00:14:37.897 Suppressions used: 00:14:37.897 count bytes template 00:14:37.897 1 11 /usr/src/fio/parse.c 00:14:37.897 1 8 libtcmalloc_minimal.so 00:14:37.897 1 904 libcrypto.so 00:14:37.897 ----------------------------------------------------- 00:14:37.897 00:14:37.897 04:05:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:37.897 04:05:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:37.897 04:05:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:37.897 04:05:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:37.897 04:05:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:37.897 04:05:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:37.897 04:05:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:37.897 04:05:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:37.897 04:05:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:37.897 04:05:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:37.897 04:05:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:37.897 04:05:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:37.897 04:05:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:37.897 04:05:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:37.897 04:05:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:37.897 04:05:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:37.897 04:05:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:37.897 04:05:25 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:37.897 04:05:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:37.897 04:05:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:37.897 04:05:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:37.897 { 00:14:37.897 "subsystems": [ 00:14:37.897 { 00:14:37.897 "subsystem": "bdev", 00:14:37.897 "config": [ 00:14:37.897 { 00:14:37.897 "params": { 00:14:37.897 "io_mechanism": "io_uring", 00:14:37.897 "conserve_cpu": false, 00:14:37.897 "filename": "/dev/nvme0n1", 00:14:37.897 "name": "xnvme_bdev" 00:14:37.897 }, 00:14:37.897 "method": "bdev_xnvme_create" 00:14:37.897 }, 00:14:37.897 { 00:14:37.897 "method": "bdev_wait_for_examine" 00:14:37.897 } 00:14:37.897 ] 00:14:37.897 } 00:14:37.897 ] 00:14:37.897 } 00:14:38.157 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:38.157 fio-3.35 00:14:38.157 Starting 1 thread 00:14:44.732 00:14:44.732 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70243: Fri Dec 6 04:05:31 2024 00:14:44.732 write: IOPS=42.2k, BW=165MiB/s (173MB/s)(825MiB/5001msec); 0 zone resets 00:14:44.732 slat (nsec): min=2758, max=65246, avg=3983.74, stdev=1739.38 00:14:44.732 clat (usec): min=55, max=168826, avg=1365.64, stdev=3391.26 00:14:44.732 lat (usec): min=59, max=168831, avg=1369.62, stdev=3391.33 00:14:44.732 clat percentiles (usec): 00:14:44.732 | 1.00th=[ 668], 5.00th=[ 709], 10.00th=[ 734], 20.00th=[ 783], 00:14:44.732 | 30.00th=[ 824], 40.00th=[ 857], 50.00th=[ 898], 60.00th=[ 938], 00:14:44.732 | 70.00th=[ 1004], 80.00th=[ 1090], 90.00th=[ 1270], 95.00th=[ 1631], 00:14:44.732 | 99.00th=[ 11207], 99.50th=[ 12518], 99.90th=[ 16581], 99.95th=[ 27395], 00:14:44.732 | 99.99th=[162530] 00:14:44.732 bw ( KiB/s): min=29560, max=242459, per=100.00%, avg=175079.44, stdev=94471.33, samples=9 00:14:44.732 iops : min= 7390, max=60614, avg=43769.78, stdev=23617.77, samples=9 00:14:44.732 lat (usec) : 100=0.05%, 250=0.24%, 500=0.17%, 750=12.89%, 1000=55.97% 00:14:44.732 lat (msec) : 2=26.09%, 4=0.07%, 10=2.74%, 20=1.68%, 50=0.06% 00:14:44.732 lat (msec) : 250=0.03% 00:14:44.732 cpu : usr=35.44%, sys=63.70%, ctx=18, majf=0, minf=763 00:14:44.732 IO depths : 1=1.5%, 2=3.0%, 4=5.9%, 8=11.9%, 16=23.7%, 32=50.9%, >=64=3.2% 00:14:44.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.732 complete : 0=0.0%, 4=98.2%, 8=0.2%, 16=0.1%, 32=0.1%, 64=1.4%, >=64=0.0% 00:14:44.732 issued rwts: total=0,211167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:44.732 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:44.732 00:14:44.732 Run status group 0 (all jobs): 00:14:44.732 WRITE: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=825MiB (865MB), run=5001-5001msec 00:14:44.732 ----------------------------------------------------- 00:14:44.732 Suppressions used: 00:14:44.732 count bytes template 00:14:44.732 1 11 /usr/src/fio/parse.c 00:14:44.732 1 8 libtcmalloc_minimal.so 00:14:44.732 1 904 libcrypto.so 00:14:44.732 ----------------------------------------------------- 00:14:44.732 00:14:44.732 00:14:44.732 real 0m13.615s 00:14:44.732 user 
0m6.200s 00:14:44.732 sys 0m6.965s 00:14:44.732 04:05:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.732 04:05:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:44.732 ************************************ 00:14:44.732 END TEST xnvme_fio_plugin 00:14:44.732 ************************************ 00:14:44.732 04:05:32 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:44.732 04:05:32 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:44.732 04:05:32 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:44.732 04:05:32 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:44.732 04:05:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:44.732 04:05:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.732 04:05:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:44.732 ************************************ 00:14:44.732 START TEST xnvme_rpc 00:14:44.732 ************************************ 00:14:44.732 04:05:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:44.732 04:05:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:44.732 04:05:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:44.732 04:05:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:44.732 04:05:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:44.732 04:05:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70329 00:14:44.732 04:05:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70329 00:14:44.732 04:05:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:44.732 04:05:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70329 ']' 00:14:44.732 04:05:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.732 04:05:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.732 04:05:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.732 04:05:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.732 04:05:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.732 [2024-12-06 04:05:32.216383] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:14:44.732 [2024-12-06 04:05:32.216531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70329 ] 00:14:44.991 [2024-12-06 04:05:32.387390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.991 [2024-12-06 04:05:32.487934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.929 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.929 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:45.929 04:05:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:14:45.929 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.929 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.929 xnvme_bdev 00:14:45.929 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.929 04:05:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:45.929 04:05:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:45.929 04:05:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:45.929 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.929 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.929 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.929 04:05:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:45.929 04:05:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:45.929 04:05:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:45.929 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.929 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.929 04:05:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:45.929 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.929 04:05:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:45.929 04:05:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:45.929 04:05:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70329 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70329 ']' 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70329 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70329 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:45.930 killing process with pid 70329 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70329' 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70329 00:14:45.930 04:05:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70329 00:14:47.307 00:14:47.307 real 0m2.643s 00:14:47.307 user 0m2.747s 00:14:47.307 sys 0m0.360s 00:14:47.307 04:05:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.307 ************************************ 00:14:47.307 END TEST xnvme_rpc 00:14:47.307 ************************************ 00:14:47.307 04:05:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.307 04:05:34 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:47.307 04:05:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:47.307 04:05:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.307 04:05:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.307 ************************************ 00:14:47.307 START TEST xnvme_bdevperf 00:14:47.307 ************************************ 00:14:47.307 04:05:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:47.307 04:05:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:47.307 04:05:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:47.307 04:05:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:47.307 04:05:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:47.307 04:05:34 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:47.307 04:05:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:47.307 04:05:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:47.565 { 00:14:47.565 "subsystems": [ 00:14:47.565 { 00:14:47.565 "subsystem": "bdev", 00:14:47.565 "config": [ 00:14:47.565 { 00:14:47.565 "params": { 00:14:47.565 "io_mechanism": "io_uring", 00:14:47.565 "conserve_cpu": true, 00:14:47.566 "filename": "/dev/nvme0n1", 00:14:47.566 "name": "xnvme_bdev" 00:14:47.566 }, 00:14:47.566 "method": "bdev_xnvme_create" 00:14:47.566 }, 00:14:47.566 { 00:14:47.566 "method": "bdev_wait_for_examine" 00:14:47.566 } 00:14:47.566 ] 00:14:47.566 } 00:14:47.566 ] 00:14:47.566 } 00:14:47.566 [2024-12-06 04:05:34.864486] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:14:47.566 [2024-12-06 04:05:34.864584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70392 ] 00:14:47.566 [2024-12-06 04:05:35.016025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.824 [2024-12-06 04:05:35.116934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.083 Running I/O for 5 seconds... 00:14:49.956 52508.00 IOPS, 205.11 MiB/s [2024-12-06T04:05:38.417Z] 56396.00 IOPS, 220.30 MiB/s [2024-12-06T04:05:39.788Z] 56081.67 IOPS, 219.07 MiB/s [2024-12-06T04:05:40.721Z] 57300.75 IOPS, 223.83 MiB/s [2024-12-06T04:05:40.721Z] 57634.40 IOPS, 225.13 MiB/s 00:14:53.194 Latency(us) 00:14:53.194 [2024-12-06T04:05:40.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.194 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:53.194 xnvme_bdev : 5.00 57588.57 224.96 0.00 0.00 1106.65 74.83 12199.78 00:14:53.194 [2024-12-06T04:05:40.721Z] =================================================================================================================== 00:14:53.194 [2024-12-06T04:05:40.721Z] Total : 57588.57 224.96 0.00 0.00 1106.65 74.83 12199.78 00:14:53.760 04:05:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:53.760 04:05:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:53.760 04:05:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:53.760 04:05:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:53.760 04:05:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:53.760 { 00:14:53.760 "subsystems": [ 00:14:53.760 { 00:14:53.760 "subsystem": "bdev", 00:14:53.760 "config": [ 00:14:53.760 { 00:14:53.760 "params": { 00:14:53.760 "io_mechanism": "io_uring", 00:14:53.760 "conserve_cpu": true, 00:14:53.760 "filename": "/dev/nvme0n1", 00:14:53.760 "name": "xnvme_bdev" 00:14:53.760 }, 00:14:53.760 "method": "bdev_xnvme_create" 00:14:53.760 }, 00:14:53.760 { 00:14:53.760 "method": "bdev_wait_for_examine" 00:14:53.760 } 00:14:53.760 ] 00:14:53.760 } 00:14:53.760 ] 00:14:53.760 } 00:14:53.760 [2024-12-06 04:05:41.154560] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
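(For reference, the bdevperf invocations above can be reproduced outside the test harness. A minimal sketch, assuming the same SPDK checkout; the JSON subsystem config printed above is saved to a file instead of being piped through /dev/fd/62, and xnvme.json is an illustrative name.)
# Run the same 4 KiB randwrite workload for 5 s at queue depth 64 against the xnvme bdev:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json xnvme.json -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096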
00:14:53.760 [2024-12-06 04:05:41.154679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70467 ] 00:14:54.018 [2024-12-06 04:05:41.310785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.018 [2024-12-06 04:05:41.408493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.275 Running I/O for 5 seconds... 00:14:56.143 21340.00 IOPS, 83.36 MiB/s [2024-12-06T04:05:45.048Z] 30799.50 IOPS, 120.31 MiB/s [2024-12-06T04:05:45.979Z] 29632.67 IOPS, 115.75 MiB/s [2024-12-06T04:05:46.918Z] 27488.25 IOPS, 107.38 MiB/s [2024-12-06T04:05:46.919Z] 26440.60 IOPS, 103.28 MiB/s 00:14:59.392 Latency(us) 00:14:59.392 [2024-12-06T04:05:46.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.392 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:59.392 xnvme_bdev : 5.00 26418.05 103.20 0.00 0.00 2415.69 48.05 233913.11 00:14:59.392 [2024-12-06T04:05:46.919Z] =================================================================================================================== 00:14:59.392 [2024-12-06T04:05:46.919Z] Total : 26418.05 103.20 0.00 0.00 2415.69 48.05 233913.11 00:14:59.957 00:14:59.957 real 0m12.574s 00:14:59.957 user 0m7.276s 00:14:59.957 sys 0m4.131s 00:14:59.957 ************************************ 00:14:59.957 END TEST xnvme_bdevperf 00:14:59.957 ************************************ 00:14:59.957 04:05:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.957 04:05:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:59.957 04:05:47 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:59.957 04:05:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:59.957 04:05:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.957 04:05:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.957 ************************************ 00:14:59.957 START TEST xnvme_fio_plugin 00:14:59.957 ************************************ 00:14:59.957 04:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:59.957 04:05:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:59.958 04:05:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:59.958 04:05:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:59.958 04:05:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.958 04:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.958 04:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:59.958 04:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:14:59.958 04:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:59.958 04:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.958 04:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:59.958 04:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:59.958 04:05:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:59.958 04:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:59.958 04:05:47 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:59.958 04:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:59.958 04:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.958 04:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:59.958 04:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:59.958 04:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:59.958 04:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:59.958 04:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:59.958 04:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:59.958 04:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.958 { 00:14:59.958 "subsystems": [ 00:14:59.958 { 00:14:59.958 "subsystem": "bdev", 00:14:59.958 "config": [ 00:14:59.958 { 00:14:59.958 "params": { 00:14:59.958 "io_mechanism": "io_uring", 00:14:59.958 "conserve_cpu": true, 00:14:59.958 "filename": "/dev/nvme0n1", 00:14:59.958 "name": "xnvme_bdev" 00:14:59.958 }, 00:14:59.958 "method": "bdev_xnvme_create" 00:14:59.958 }, 00:14:59.958 { 00:14:59.958 "method": "bdev_wait_for_examine" 00:14:59.958 } 00:14:59.958 ] 00:14:59.958 } 00:14:59.958 ] 00:14:59.958 } 00:15:00.217 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:00.217 fio-3.35 00:15:00.217 Starting 1 thread 00:15:06.777 00:15:06.777 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70581: Fri Dec 6 04:05:53 2024 00:15:06.777 read: IOPS=60.3k, BW=236MiB/s (247MB/s)(1178MiB/5001msec) 00:15:06.777 slat (usec): min=2, max=221, avg= 3.76, stdev= 1.75 00:15:06.777 clat (usec): min=268, max=14252, avg=915.87, stdev=235.08 00:15:06.777 lat (usec): min=271, max=14255, avg=919.62, stdev=235.40 00:15:06.777 clat percentiles (usec): 00:15:06.777 | 1.00th=[ 652], 5.00th=[ 701], 10.00th=[ 725], 20.00th=[ 766], 00:15:06.777 | 30.00th=[ 799], 40.00th=[ 832], 50.00th=[ 873], 60.00th=[ 906], 00:15:06.777 | 70.00th=[ 955], 80.00th=[ 1029], 90.00th=[ 1139], 95.00th=[ 1270], 00:15:06.777 | 99.00th=[ 1745], 99.50th=[ 2040], 99.90th=[ 2933], 99.95th=[ 3294], 00:15:06.777 | 99.99th=[ 6259] 00:15:06.777 bw ( KiB/s): 
min=233864, max=252704, per=100.00%, avg=242868.44, stdev=7075.19, samples=9 00:15:06.777 iops : min=58466, max=63176, avg=60717.11, stdev=1768.80, samples=9 00:15:06.777 lat (usec) : 500=0.15%, 750=16.20%, 1000=59.74% 00:15:06.777 lat (msec) : 2=23.38%, 4=0.51%, 10=0.02%, 20=0.01% 00:15:06.777 cpu : usr=42.68%, sys=53.12%, ctx=11, majf=0, minf=762 00:15:06.777 IO depths : 1=1.1%, 2=2.7%, 4=6.0%, 8=12.5%, 16=25.2%, 32=51.0%, >=64=1.6% 00:15:06.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.777 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.6%, >=64=0.0% 00:15:06.777 issued rwts: total=301549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.777 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:06.777 00:15:06.777 Run status group 0 (all jobs): 00:15:06.777 READ: bw=236MiB/s (247MB/s), 236MiB/s-236MiB/s (247MB/s-247MB/s), io=1178MiB (1235MB), run=5001-5001msec 00:15:06.777 ----------------------------------------------------- 00:15:06.777 Suppressions used: 00:15:06.777 count bytes template 00:15:06.777 1 11 /usr/src/fio/parse.c 00:15:06.777 1 8 libtcmalloc_minimal.so 00:15:06.777 1 904 libcrypto.so 00:15:06.777 ----------------------------------------------------- 00:15:06.777 00:15:06.777 04:05:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:06.777 04:05:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:06.777 04:05:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:06.777 04:05:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:06.777 04:05:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:06.777 04:05:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:06.777 04:05:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:06.777 04:05:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:06.777 04:05:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:06.777 04:05:54 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:06.777 04:05:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:06.777 04:05:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:06.777 04:05:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:06.777 04:05:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:06.777 04:05:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:06.777 04:05:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:06.777 04:05:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 
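(A note on the fio runs in this test: the SPDK bdev engine is an external fio plugin, so it is loaded via LD_PRELOAD rather than shipped with fio, and on ASan builds the sanitizer runtime must be preloaded ahead of it. A minimal sketch using the paths from this run, with xnvme.json standing in for the config the harness pipes via /dev/fd/62:)
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=xnvme.json \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev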
00:15:06.777 04:05:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:06.777 04:05:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:06.777 04:05:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:06.777 04:05:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:06.777 { 00:15:06.777 "subsystems": [ 00:15:06.777 { 00:15:06.777 "subsystem": "bdev", 00:15:06.777 "config": [ 00:15:06.777 { 00:15:06.777 "params": { 00:15:06.777 "io_mechanism": "io_uring", 00:15:06.777 "conserve_cpu": true, 00:15:06.777 "filename": "/dev/nvme0n1", 00:15:06.777 "name": "xnvme_bdev" 00:15:06.777 }, 00:15:06.777 "method": "bdev_xnvme_create" 00:15:06.777 }, 00:15:06.777 { 00:15:06.777 "method": "bdev_wait_for_examine" 00:15:06.777 } 00:15:06.777 ] 00:15:06.777 } 00:15:06.777 ] 00:15:06.777 } 00:15:07.036 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:07.036 fio-3.35 00:15:07.036 Starting 1 thread 00:15:13.645 00:15:13.645 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70677: Fri Dec 6 04:05:59 2024 00:15:13.645 write: IOPS=57.3k, BW=224MiB/s (235MB/s)(1120MiB/5001msec); 0 zone resets 00:15:13.645 slat (usec): min=2, max=774, avg= 4.13, stdev= 3.65 00:15:13.645 clat (usec): min=103, max=10828, avg=957.00, stdev=270.80 00:15:13.645 lat (usec): min=106, max=10832, avg=961.13, stdev=271.38 00:15:13.645 clat percentiles (usec): 00:15:13.645 | 1.00th=[ 668], 5.00th=[ 709], 10.00th=[ 742], 20.00th=[ 783], 00:15:13.645 | 30.00th=[ 824], 40.00th=[ 865], 50.00th=[ 898], 60.00th=[ 938], 00:15:13.645 | 70.00th=[ 1004], 80.00th=[ 1090], 90.00th=[ 1221], 95.00th=[ 1401], 00:15:13.645 | 99.00th=[ 1778], 99.50th=[ 2008], 99.90th=[ 2704], 99.95th=[ 2999], 00:15:13.645 | 99.99th=[10028] 00:15:13.645 bw ( KiB/s): min=206576, max=256512, per=100.00%, avg=233790.78, stdev=18431.24, samples=9 00:15:13.645 iops : min=51644, max=64128, avg=58447.67, stdev=4607.80, samples=9 00:15:13.645 lat (usec) : 250=0.01%, 500=0.13%, 750=12.37%, 1000=56.69% 00:15:13.645 lat (msec) : 2=30.29%, 4=0.49%, 10=0.01%, 20=0.01% 00:15:13.645 cpu : usr=39.58%, sys=55.84%, ctx=26, majf=0, minf=763 00:15:13.645 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.2%, 16=24.7%, 32=50.8%, >=64=1.6% 00:15:13.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.645 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:13.645 issued rwts: total=0,286782,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:13.645 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:13.645 00:15:13.645 Run status group 0 (all jobs): 00:15:13.645 WRITE: bw=224MiB/s (235MB/s), 224MiB/s-224MiB/s (235MB/s-235MB/s), io=1120MiB (1175MB), run=5001-5001msec 00:15:13.645 ----------------------------------------------------- 00:15:13.645 Suppressions used: 00:15:13.645 count bytes template 00:15:13.645 1 11 /usr/src/fio/parse.c 00:15:13.645 1 8 libtcmalloc_minimal.so 00:15:13.645 1 904 libcrypto.so 00:15:13.645 ----------------------------------------------------- 00:15:13.645 00:15:13.645 00:15:13.645 real 0m13.383s 00:15:13.645 user 0m6.691s 00:15:13.645 sys 
0m5.940s 00:15:13.645 04:06:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.645 ************************************ 00:15:13.645 END TEST xnvme_fio_plugin 00:15:13.645 ************************************ 00:15:13.645 04:06:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:13.645 04:06:00 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:13.645 04:06:00 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:15:13.645 04:06:00 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:15:13.645 04:06:00 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:15:13.645 04:06:00 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:13.645 04:06:00 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:13.645 04:06:00 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:13.645 04:06:00 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:13.645 04:06:00 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:13.645 04:06:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:13.645 04:06:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.645 04:06:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.645 ************************************ 00:15:13.645 START TEST xnvme_rpc 00:15:13.645 ************************************ 00:15:13.645 04:06:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:13.645 04:06:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:13.645 04:06:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:13.645 04:06:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:13.645 04:06:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:13.645 04:06:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70759 00:15:13.645 04:06:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70759 00:15:13.645 04:06:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70759 ']' 00:15:13.645 04:06:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.645 04:06:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.645 04:06:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.645 04:06:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:13.645 04:06:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.645 04:06:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.645 [2024-12-06 04:06:00.925896] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
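(The xnvme_rpc test starting above drives the bdev RPCs directly; a minimal sketch of the same sequence against the spdk_tgt just launched, assuming the default /var/tmp/spdk.sock socket and the rpc.py client from the SPDK repo:)
# Create an xnvme bdev on the io_uring_cmd char device, verify its registered filename, then delete it:
scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
scripts/rpc.py bdev_xnvme_delete xnvme_bdev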
00:15:13.645 [2024-12-06 04:06:00.926022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70759 ] 00:15:13.645 [2024-12-06 04:06:01.084237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.645 [2024-12-06 04:06:01.169407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.578 xnvme_bdev 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.578 04:06:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70759 00:15:14.579 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70759 ']' 00:15:14.579 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70759 00:15:14.579 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:14.579 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:14.579 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70759 00:15:14.579 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:14.579 killing process with pid 70759 00:15:14.579 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:14.579 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70759' 00:15:14.579 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70759 00:15:14.579 04:06:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70759 00:15:15.953 00:15:15.953 real 0m2.328s 00:15:15.953 user 0m2.475s 00:15:15.953 sys 0m0.368s 00:15:15.953 04:06:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.953 04:06:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.953 ************************************ 00:15:15.953 END TEST xnvme_rpc 00:15:15.953 ************************************ 00:15:15.953 04:06:03 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:15.953 04:06:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:15.953 04:06:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.953 04:06:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:15.953 ************************************ 00:15:15.953 START TEST xnvme_bdevperf 00:15:15.953 ************************************ 00:15:15.953 04:06:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:15.953 04:06:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:15.953 04:06:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:15.953 04:06:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:15.953 04:06:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:15.953 04:06:03 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:15.953 04:06:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:15.953 04:06:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:15.953 { 00:15:15.953 "subsystems": [ 00:15:15.953 { 00:15:15.953 "subsystem": "bdev", 00:15:15.953 "config": [ 00:15:15.953 { 00:15:15.953 "params": { 00:15:15.953 "io_mechanism": "io_uring_cmd", 00:15:15.953 "conserve_cpu": false, 00:15:15.953 "filename": "/dev/ng0n1", 00:15:15.953 "name": "xnvme_bdev" 00:15:15.953 }, 00:15:15.953 "method": "bdev_xnvme_create" 00:15:15.953 }, 00:15:15.953 { 00:15:15.953 "method": "bdev_wait_for_examine" 00:15:15.953 } 00:15:15.953 ] 00:15:15.953 } 00:15:15.953 ] 00:15:15.953 } 00:15:15.953 [2024-12-06 04:06:03.280563] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:15:15.953 [2024-12-06 04:06:03.280656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70822 ] 00:15:15.953 [2024-12-06 04:06:03.432751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.210 [2024-12-06 04:06:03.517637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.210 Running I/O for 5 seconds... 00:15:18.512 64155.00 IOPS, 250.61 MiB/s [2024-12-06T04:06:06.973Z] 63484.00 IOPS, 247.98 MiB/s [2024-12-06T04:06:07.907Z] 64304.67 IOPS, 251.19 MiB/s [2024-12-06T04:06:08.845Z] 64160.50 IOPS, 250.63 MiB/s [2024-12-06T04:06:08.845Z] 63910.40 IOPS, 249.65 MiB/s 00:15:21.318 Latency(us) 00:15:21.318 [2024-12-06T04:06:08.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.318 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:21.318 xnvme_bdev : 5.00 63866.01 249.48 0.00 0.00 998.17 283.57 10838.65 00:15:21.318 [2024-12-06T04:06:08.845Z] =================================================================================================================== 00:15:21.318 [2024-12-06T04:06:08.845Z] Total : 63866.01 249.48 0.00 0.00 998.17 283.57 10838.65 00:15:22.251 04:06:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:22.251 04:06:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:22.251 04:06:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:22.251 04:06:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:22.251 04:06:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:22.251 { 00:15:22.251 "subsystems": [ 00:15:22.251 { 00:15:22.251 "subsystem": "bdev", 00:15:22.251 "config": [ 00:15:22.251 { 00:15:22.251 "params": { 00:15:22.251 "io_mechanism": "io_uring_cmd", 00:15:22.251 "conserve_cpu": false, 00:15:22.251 "filename": "/dev/ng0n1", 00:15:22.251 "name": "xnvme_bdev" 00:15:22.251 }, 00:15:22.251 "method": "bdev_xnvme_create" 00:15:22.251 }, 00:15:22.251 { 00:15:22.251 "method": "bdev_wait_for_examine" 00:15:22.251 } 00:15:22.251 ] 00:15:22.251 } 00:15:22.251 ] 00:15:22.251 } 00:15:22.251 [2024-12-06 04:06:09.522613] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:15:22.251 [2024-12-06 04:06:09.522743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70896 ] 00:15:22.251 [2024-12-06 04:06:09.684400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.510 [2024-12-06 04:06:09.785377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.510 Running I/O for 5 seconds... 00:15:24.819 50727.00 IOPS, 198.15 MiB/s [2024-12-06T04:06:13.280Z] 53219.50 IOPS, 207.89 MiB/s [2024-12-06T04:06:14.213Z] 54452.67 IOPS, 212.71 MiB/s [2024-12-06T04:06:15.147Z] 55195.25 IOPS, 215.61 MiB/s 00:15:27.620 Latency(us) 00:15:27.620 [2024-12-06T04:06:15.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.620 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:27.620 xnvme_bdev : 5.00 54632.04 213.41 0.00 0.00 1167.28 69.32 50009.01 00:15:27.620 [2024-12-06T04:06:15.147Z] =================================================================================================================== 00:15:27.620 [2024-12-06T04:06:15.147Z] Total : 54632.04 213.41 0.00 0.00 1167.28 69.32 50009.01 00:15:28.549 04:06:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:28.550 04:06:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:28.550 04:06:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:28.550 04:06:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:28.550 04:06:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:28.550 { 00:15:28.550 "subsystems": [ 00:15:28.550 { 00:15:28.550 "subsystem": "bdev", 00:15:28.550 "config": [ 00:15:28.550 { 00:15:28.550 "params": { 00:15:28.550 "io_mechanism": "io_uring_cmd", 00:15:28.550 "conserve_cpu": false, 00:15:28.550 "filename": "/dev/ng0n1", 00:15:28.550 "name": "xnvme_bdev" 00:15:28.550 }, 00:15:28.550 "method": "bdev_xnvme_create" 00:15:28.550 }, 00:15:28.550 { 00:15:28.550 "method": "bdev_wait_for_examine" 00:15:28.550 } 00:15:28.550 ] 00:15:28.550 } 00:15:28.550 ] 00:15:28.550 } 00:15:28.550 [2024-12-06 04:06:15.835925] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:15:28.550 [2024-12-06 04:06:15.836098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70971 ] 00:15:28.550 [2024-12-06 04:06:16.014585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.807 [2024-12-06 04:06:16.114950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.064 Running I/O for 5 seconds... 
00:15:30.973 95104.00 IOPS, 371.50 MiB/s [2024-12-06T04:06:19.466Z] 95136.00 IOPS, 371.62 MiB/s [2024-12-06T04:06:20.401Z] 95424.00 IOPS, 372.75 MiB/s [2024-12-06T04:06:21.776Z] 94576.00 IOPS, 369.44 MiB/s [2024-12-06T04:06:21.776Z] 94822.40 IOPS, 370.40 MiB/s 00:15:34.249 Latency(us) 00:15:34.249 [2024-12-06T04:06:21.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.249 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:34.249 xnvme_bdev : 5.00 94775.78 370.22 0.00 0.00 671.86 494.67 2318.97 00:15:34.249 [2024-12-06T04:06:21.776Z] =================================================================================================================== 00:15:34.249 [2024-12-06T04:06:21.776Z] Total : 94775.78 370.22 0.00 0.00 671.86 494.67 2318.97 00:15:34.507 04:06:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:34.507 04:06:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:34.507 04:06:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:34.507 04:06:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:34.507 04:06:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:34.507 { 00:15:34.507 "subsystems": [ 00:15:34.507 { 00:15:34.507 "subsystem": "bdev", 00:15:34.507 "config": [ 00:15:34.507 { 00:15:34.507 "params": { 00:15:34.507 "io_mechanism": "io_uring_cmd", 00:15:34.507 "conserve_cpu": false, 00:15:34.507 "filename": "/dev/ng0n1", 00:15:34.507 "name": "xnvme_bdev" 00:15:34.507 }, 00:15:34.507 "method": "bdev_xnvme_create" 00:15:34.507 }, 00:15:34.507 { 00:15:34.507 "method": "bdev_wait_for_examine" 00:15:34.507 } 00:15:34.507 ] 00:15:34.507 } 00:15:34.507 ] 00:15:34.507 } 00:15:34.507 [2024-12-06 04:06:21.998371] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:15:34.507 [2024-12-06 04:06:21.998494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71040 ] 00:15:34.764 [2024-12-06 04:06:22.155710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.764 [2024-12-06 04:06:22.242426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.022 Running I/O for 5 seconds... 
00:15:37.329 417.00 IOPS, 1.63 MiB/s [2024-12-06T04:06:25.806Z] 456.00 IOPS, 1.78 MiB/s [2024-12-06T04:06:26.738Z] 626.00 IOPS, 2.45 MiB/s [2024-12-06T04:06:27.673Z] 2012.50 IOPS, 7.86 MiB/s [2024-12-06T04:06:27.673Z] 1775.20 IOPS, 6.93 MiB/s 00:15:40.146 Latency(us) 00:15:40.146 [2024-12-06T04:06:27.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.146 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:40.146 xnvme_bdev : 5.10 1754.02 6.85 0.00 0.00 36117.64 40.57 551712.30 00:15:40.146 [2024-12-06T04:06:27.673Z] =================================================================================================================== 00:15:40.146 [2024-12-06T04:06:27.673Z] Total : 1754.02 6.85 0.00 0.00 36117.64 40.57 551712.30 00:15:40.712 00:15:40.712 real 0m24.878s 00:15:40.712 user 0m13.803s 00:15:40.712 sys 0m10.683s 00:15:40.712 04:06:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.712 04:06:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:40.712 ************************************ 00:15:40.712 END TEST xnvme_bdevperf 00:15:40.712 ************************************ 00:15:40.712 04:06:28 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:40.712 04:06:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:40.712 04:06:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.712 04:06:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.712 ************************************ 00:15:40.712 START TEST xnvme_fio_plugin 00:15:40.712 ************************************ 00:15:40.712 04:06:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:40.712 04:06:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:40.712 04:06:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:40.712 04:06:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:40.712 04:06:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:40.712 04:06:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:40.713 04:06:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:40.713 04:06:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:40.713 04:06:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:40.713 04:06:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:40.713 04:06:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:40.713 04:06:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:40.713 04:06:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 
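(The fio plugin test beginning above first resolves which ASan runtime the plugin links against, exactly as the surrounding ldd/grep/awk trace shows. A minimal sketch of that detection step, assuming the plugin path from this run:)
# Preload the sanitizer runtime ahead of the fio plugin so ASan interposes correctly:
asan_lib=$(ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev | grep libasan | awk '{print $3}')
[[ -n "$asan_lib" ]] && LD_PRELOAD="$asan_lib /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev"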
00:15:40.713 04:06:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:40.713 04:06:28 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:40.713 04:06:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:40.713 04:06:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:40.713 04:06:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:40.713 04:06:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:40.713 04:06:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:40.713 04:06:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:40.713 04:06:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:40.713 04:06:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:40.713 04:06:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:40.713 { 00:15:40.713 "subsystems": [ 00:15:40.713 { 00:15:40.713 "subsystem": "bdev", 00:15:40.713 "config": [ 00:15:40.713 { 00:15:40.713 "params": { 00:15:40.713 "io_mechanism": "io_uring_cmd", 00:15:40.713 "conserve_cpu": false, 00:15:40.713 "filename": "/dev/ng0n1", 00:15:40.713 "name": "xnvme_bdev" 00:15:40.713 }, 00:15:40.713 "method": "bdev_xnvme_create" 00:15:40.713 }, 00:15:40.713 { 00:15:40.713 "method": "bdev_wait_for_examine" 00:15:40.713 } 00:15:40.713 ] 00:15:40.713 } 00:15:40.713 ] 00:15:40.713 } 00:15:40.972 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:40.972 fio-3.35 00:15:40.972 Starting 1 thread 00:15:47.527 00:15:47.527 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71157: Fri Dec 6 04:06:33 2024 00:15:47.527 read: IOPS=57.9k, BW=226MiB/s (237MB/s)(1132MiB/5001msec) 00:15:47.527 slat (nsec): min=2857, max=78191, avg=3683.23, stdev=1504.72 00:15:47.527 clat (usec): min=128, max=3670, avg=960.74, stdev=309.20 00:15:47.527 lat (usec): min=133, max=3690, avg=964.42, stdev=309.57 00:15:47.527 clat percentiles (usec): 00:15:47.527 | 1.00th=[ 644], 5.00th=[ 676], 10.00th=[ 701], 20.00th=[ 742], 00:15:47.527 | 30.00th=[ 783], 40.00th=[ 816], 50.00th=[ 857], 60.00th=[ 906], 00:15:47.527 | 70.00th=[ 996], 80.00th=[ 1123], 90.00th=[ 1401], 95.00th=[ 1647], 00:15:47.527 | 99.00th=[ 2024], 99.50th=[ 2212], 99.90th=[ 2737], 99.95th=[ 2966], 00:15:47.527 | 99.99th=[ 3425] 00:15:47.527 bw ( KiB/s): min=170496, max=271872, per=100.00%, avg=234496.00, stdev=42522.21, samples=9 00:15:47.527 iops : min=42624, max=67968, avg=58624.00, stdev=10630.55, samples=9 00:15:47.527 lat (usec) : 250=0.01%, 750=22.45%, 1000=47.87% 00:15:47.527 lat (msec) : 2=28.59%, 4=1.08% 00:15:47.527 cpu : usr=39.68%, sys=59.46%, ctx=10, majf=0, minf=762 00:15:47.527 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:47.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.527 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:15:47.527 issued rwts: total=289668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.527 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:47.527 00:15:47.527 Run status group 0 (all jobs): 00:15:47.527 READ: bw=226MiB/s (237MB/s), 226MiB/s-226MiB/s (237MB/s-237MB/s), io=1132MiB (1186MB), run=5001-5001msec 00:15:47.527 ----------------------------------------------------- 00:15:47.527 Suppressions used: 00:15:47.527 count bytes template 00:15:47.527 1 11 /usr/src/fio/parse.c 00:15:47.527 1 8 libtcmalloc_minimal.so 00:15:47.527 1 904 libcrypto.so 00:15:47.527 ----------------------------------------------------- 00:15:47.527 00:15:47.527 04:06:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:47.527 04:06:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:47.527 04:06:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:47.527 04:06:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:47.527 04:06:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:47.527 04:06:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:47.527 04:06:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:47.527 04:06:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:47.527 04:06:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:47.527 04:06:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:47.527 04:06:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:47.527 04:06:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:47.527 04:06:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:47.527 04:06:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:47.527 04:06:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:47.527 04:06:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:47.527 04:06:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:47.527 04:06:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:47.527 04:06:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:47.527 04:06:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:47.527 04:06:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:47.527 { 00:15:47.527 "subsystems": [ 00:15:47.527 { 00:15:47.527 "subsystem": "bdev", 00:15:47.527 "config": [ 00:15:47.527 { 00:15:47.527 "params": { 00:15:47.527 "io_mechanism": "io_uring_cmd", 00:15:47.527 "conserve_cpu": false, 00:15:47.527 "filename": "/dev/ng0n1", 00:15:47.527 "name": "xnvme_bdev" 00:15:47.527 }, 00:15:47.527 "method": "bdev_xnvme_create" 00:15:47.527 }, 00:15:47.527 { 00:15:47.527 "method": "bdev_wait_for_examine" 00:15:47.527 } 00:15:47.527 ] 00:15:47.527 } 00:15:47.527 ] 00:15:47.527 } 00:15:47.527 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:47.527 fio-3.35 00:15:47.527 Starting 1 thread 00:15:54.123 00:15:54.123 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71249: Fri Dec 6 04:06:40 2024 00:15:54.123 write: IOPS=34.8k, BW=136MiB/s (143MB/s)(681MiB/5003msec); 0 zone resets 00:15:54.123 slat (nsec): min=2902, max=71423, avg=3859.98, stdev=2124.03 00:15:54.123 clat (usec): min=48, max=59189, avg=1731.32, stdev=2409.48 00:15:54.123 lat (usec): min=52, max=59192, avg=1735.18, stdev=2409.55 00:15:54.123 clat percentiles (usec): 00:15:54.123 | 1.00th=[ 212], 5.00th=[ 412], 10.00th=[ 553], 20.00th=[ 709], 00:15:54.123 | 30.00th=[ 807], 40.00th=[ 922], 50.00th=[ 1074], 60.00th=[ 1270], 00:15:54.123 | 70.00th=[ 1483], 80.00th=[ 1762], 90.00th=[ 3130], 95.00th=[ 6915], 00:15:54.123 | 99.00th=[10290], 99.50th=[11469], 99.90th=[16909], 99.95th=[46924], 00:15:54.123 | 99.99th=[55837] 00:15:54.123 bw ( KiB/s): min=107984, max=189632, per=100.00%, avg=142251.56, stdev=23484.56, samples=9 00:15:54.123 iops : min=26996, max=47408, avg=35562.89, stdev=5871.14, samples=9 00:15:54.123 lat (usec) : 50=0.01%, 100=0.17%, 250=1.28%, 500=6.31%, 750=16.14% 00:15:54.123 lat (usec) : 1000=21.24% 00:15:54.123 lat (msec) : 2=39.10%, 4=7.10%, 10=7.40%, 20=1.18%, 50=0.04% 00:15:54.123 lat (msec) : 100=0.04% 00:15:54.123 cpu : usr=36.75%, sys=62.34%, ctx=10, majf=0, minf=763 00:15:54.123 IO depths : 1=0.6%, 2=1.3%, 4=2.7%, 8=6.0%, 16=15.9%, 32=68.9%, >=64=4.6% 00:15:54.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.123 complete : 0=0.0%, 4=97.0%, 8=0.5%, 16=0.6%, 32=0.8%, 64=1.2%, >=64=0.0% 00:15:54.123 issued rwts: total=0,174284,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:54.123 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:54.123 00:15:54.123 Run status group 0 (all jobs): 00:15:54.123 WRITE: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=681MiB (714MB), run=5003-5003msec 00:15:54.123 ----------------------------------------------------- 00:15:54.123 Suppressions used: 00:15:54.123 count bytes template 00:15:54.123 1 11 /usr/src/fio/parse.c 00:15:54.123 1 8 libtcmalloc_minimal.so 00:15:54.123 1 904 libcrypto.so 00:15:54.123 ----------------------------------------------------- 00:15:54.123 00:15:54.123 00:15:54.123 real 0m13.423s 00:15:54.123 user 0m6.434s 00:15:54.123 sys 0m6.581s 00:15:54.123 04:06:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.123 04:06:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:54.123 ************************************ 00:15:54.123 END TEST xnvme_fio_plugin 00:15:54.123 ************************************ 00:15:54.123 04:06:41 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:54.123 04:06:41 nvme_xnvme -- xnvme/xnvme.sh@83 -- 
# method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:54.123 04:06:41 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:54.123 04:06:41 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:54.123 04:06:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:54.123 04:06:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.123 04:06:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:54.123 ************************************ 00:15:54.123 START TEST xnvme_rpc 00:15:54.123 ************************************ 00:15:54.123 04:06:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:54.123 04:06:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:54.123 04:06:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:54.123 04:06:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:54.123 04:06:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:54.123 04:06:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71329 00:15:54.123 04:06:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71329 00:15:54.123 04:06:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71329 ']' 00:15:54.123 04:06:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.123 04:06:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.123 04:06:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.123 04:06:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:54.123 04:06:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.123 04:06:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.381 [2024-12-06 04:06:41.700647] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:15:54.381 [2024-12-06 04:06:41.700783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71329 ] 00:15:54.381 [2024-12-06 04:06:41.861278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.639 [2024-12-06 04:06:41.963646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.206 xnvme_bdev 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.206 04:06:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:55.207 04:06:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:55.207 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.207 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.207 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.207 04:06:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71329 00:15:55.207 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71329 ']' 00:15:55.207 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71329 00:15:55.207 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:55.207 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.207 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71329 00:15:55.465 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:55.465 killing process with pid 71329 00:15:55.465 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:55.465 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71329' 00:15:55.465 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71329 00:15:55.465 04:06:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71329 00:15:56.840 00:15:56.840 real 0m2.649s 00:15:56.840 user 0m2.766s 00:15:56.840 sys 0m0.341s 00:15:56.840 04:06:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.840 ************************************ 00:15:56.840 END TEST xnvme_rpc 00:15:56.840 ************************************ 00:15:56.840 04:06:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.840 04:06:44 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:56.840 04:06:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:56.840 04:06:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.840 04:06:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:56.840 ************************************ 00:15:56.840 START TEST xnvme_bdevperf 00:15:56.840 ************************************ 00:15:56.840 04:06:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:56.840 04:06:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:56.840 04:06:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:56.840 04:06:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:56.840 04:06:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:56.840 04:06:44 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:56.840 04:06:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:56.840 04:06:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:57.100 { 00:15:57.100 "subsystems": [ 00:15:57.100 { 00:15:57.100 "subsystem": "bdev", 00:15:57.100 "config": [ 00:15:57.100 { 00:15:57.100 "params": { 00:15:57.100 "io_mechanism": "io_uring_cmd", 00:15:57.100 "conserve_cpu": true, 00:15:57.100 "filename": "/dev/ng0n1", 00:15:57.100 "name": "xnvme_bdev" 00:15:57.100 }, 00:15:57.100 "method": "bdev_xnvme_create" 00:15:57.100 }, 00:15:57.100 { 00:15:57.100 "method": "bdev_wait_for_examine" 00:15:57.100 } 00:15:57.100 ] 00:15:57.100 } 00:15:57.100 ] 00:15:57.100 } 00:15:57.100 [2024-12-06 04:06:44.420222] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:15:57.100 [2024-12-06 04:06:44.420409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71397 ] 00:15:57.100 [2024-12-06 04:06:44.595945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.360 [2024-12-06 04:06:44.704148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.619 Running I/O for 5 seconds... 00:15:59.501 38656.00 IOPS, 151.00 MiB/s [2024-12-06T04:06:48.413Z] 38588.50 IOPS, 150.74 MiB/s [2024-12-06T04:06:49.346Z] 36776.33 IOPS, 143.66 MiB/s [2024-12-06T04:06:50.280Z] 37150.25 IOPS, 145.12 MiB/s 00:16:02.753 Latency(us) 00:16:02.753 [2024-12-06T04:06:50.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.753 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:02.753 xnvme_bdev : 5.00 37239.91 145.47 0.00 0.00 1714.02 683.72 4763.96 00:16:02.753 [2024-12-06T04:06:50.280Z] =================================================================================================================== 00:16:02.753 [2024-12-06T04:06:50.280Z] Total : 37239.91 145.47 0.00 0.00 1714.02 683.72 4763.96 00:16:03.319 04:06:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:03.319 04:06:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:03.319 04:06:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:03.319 04:06:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:03.319 04:06:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:03.319 { 00:16:03.319 "subsystems": [ 00:16:03.319 { 00:16:03.319 "subsystem": "bdev", 00:16:03.319 "config": [ 00:16:03.319 { 00:16:03.319 "params": { 00:16:03.319 "io_mechanism": "io_uring_cmd", 00:16:03.319 "conserve_cpu": true, 00:16:03.319 "filename": "/dev/ng0n1", 00:16:03.319 "name": "xnvme_bdev" 00:16:03.319 }, 00:16:03.319 "method": "bdev_xnvme_create" 00:16:03.319 }, 00:16:03.319 { 00:16:03.319 "method": "bdev_wait_for_examine" 00:16:03.319 } 00:16:03.319 ] 00:16:03.319 } 00:16:03.319 ] 00:16:03.319 } 00:16:03.319 [2024-12-06 04:06:50.819662] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
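Each bdevperf pass in this test follows the same pattern: gen_conf prints the "subsystems" JSON shown above onto a file descriptor and bdevperf reads it through --json /dev/fd/62. A standalone sketch with the config saved to a file instead (bdev.json here is a hypothetical copy of that JSON):

    # 5 s random-read pass at queue depth 64 with 4 KiB I/Os against the bdev named xnvme_bdev
    build/examples/bdevperf --json bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096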
00:16:03.319 [2024-12-06 04:06:50.819791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71476 ] 00:16:03.578 [2024-12-06 04:06:50.978317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.578 [2024-12-06 04:06:51.086101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.836 Running I/O for 5 seconds... 00:16:06.156 38903.00 IOPS, 151.96 MiB/s [2024-12-06T04:06:54.620Z] 38846.50 IOPS, 151.74 MiB/s [2024-12-06T04:06:55.557Z] 37792.67 IOPS, 147.63 MiB/s [2024-12-06T04:06:56.498Z] 37556.75 IOPS, 146.71 MiB/s [2024-12-06T04:06:56.498Z] 37288.20 IOPS, 145.66 MiB/s 00:16:08.971 Latency(us) 00:16:08.971 [2024-12-06T04:06:56.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.971 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:08.971 xnvme_bdev : 5.00 37275.78 145.61 0.00 0.00 1712.05 661.66 5142.06 00:16:08.971 [2024-12-06T04:06:56.498Z] =================================================================================================================== 00:16:08.971 [2024-12-06T04:06:56.498Z] Total : 37275.78 145.61 0.00 0.00 1712.05 661.66 5142.06 00:16:09.907 04:06:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:09.907 04:06:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:09.907 04:06:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:09.907 04:06:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:09.907 04:06:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:16:09.907 { 00:16:09.907 "subsystems": [ 00:16:09.907 { 00:16:09.907 "subsystem": "bdev", 00:16:09.907 "config": [ 00:16:09.907 { 00:16:09.907 "params": { 00:16:09.907 "io_mechanism": "io_uring_cmd", 00:16:09.907 "conserve_cpu": true, 00:16:09.907 "filename": "/dev/ng0n1", 00:16:09.907 "name": "xnvme_bdev" 00:16:09.907 }, 00:16:09.907 "method": "bdev_xnvme_create" 00:16:09.907 }, 00:16:09.907 { 00:16:09.907 "method": "bdev_wait_for_examine" 00:16:09.907 } 00:16:09.907 ] 00:16:09.907 } 00:16:09.907 ] 00:16:09.907 } 00:16:09.907 [2024-12-06 04:06:57.301523] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:16:09.907 [2024-12-06 04:06:57.301645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71547 ] 00:16:10.164 [2024-12-06 04:06:57.464141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.164 [2024-12-06 04:06:57.569850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.420 Running I/O for 5 seconds... 
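Because every I/O in these passes is a fixed 4096 bytes (-o 4096), the MiB/s column is just IOPS scaled by the I/O size, so the totals are easy to sanity-check; for the randwrite total above:

    # 37275.78 IOPS * 4096 B per I/O / 1048576 B per MiB = 145.61 MiB/s
    echo '37275.78 * 4096 / 1048576' | bc -l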
00:16:12.719 80960.00 IOPS, 316.25 MiB/s [2024-12-06T04:07:01.179Z]
79456.00 IOPS, 310.38 MiB/s [2024-12-06T04:07:02.113Z]
79274.67 IOPS, 309.67 MiB/s [2024-12-06T04:07:03.047Z]
79568.00 IOPS, 310.81 MiB/s
00:16:15.520 Latency(us)
00:16:15.520 [2024-12-06T04:07:03.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:15.520 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096)
00:16:15.520 xnvme_bdev : 5.00 79671.98 311.22 0.00 0.00 799.80 419.05 3201.18
00:16:15.520 [2024-12-06T04:07:03.047Z] ===================================================================================================================
00:16:15.520 [2024-12-06T04:07:03.047Z] Total : 79671.98 311.22 0.00 0.00 799.80 419.05 3201.18
00:16:16.107 04:07:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:16:16.107 04:07:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096
00:16:16.108 04:07:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:16:16.108 04:07:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:16:16.108 04:07:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:16.108 {
00:16:16.108 "subsystems": [
00:16:16.108 {
00:16:16.108 "subsystem": "bdev",
00:16:16.108 "config": [
00:16:16.108 {
00:16:16.108 "params": {
00:16:16.108 "io_mechanism": "io_uring_cmd",
00:16:16.108 "conserve_cpu": true,
00:16:16.108 "filename": "/dev/ng0n1",
00:16:16.108 "name": "xnvme_bdev"
00:16:16.108 },
00:16:16.108 "method": "bdev_xnvme_create"
00:16:16.108 },
00:16:16.108 {
00:16:16.108 "method": "bdev_wait_for_examine"
00:16:16.108 }
00:16:16.108 ]
00:16:16.108 }
00:16:16.108 ]
00:16:16.108 }
00:16:16.108 [2024-12-06 04:07:03.619891] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization...
00:16:16.108 [2024-12-06 04:07:03.620015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71620 ]
00:16:16.365 [2024-12-06 04:07:03.780888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:16.365 [2024-12-06 04:07:03.880507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:16.630 Running I/O for 5 seconds...
00:16:18.946 56065.00 IOPS, 219.00 MiB/s [2024-12-06T04:07:07.405Z]
53424.50 IOPS, 208.69 MiB/s [2024-12-06T04:07:08.339Z]
40091.33 IOPS, 156.61 MiB/s [2024-12-06T04:07:09.273Z]
42530.75 IOPS, 166.14 MiB/s [2024-12-06T04:07:09.273Z]
36374.40 IOPS, 142.09 MiB/s
00:16:21.746 Latency(us)
00:16:21.746 [2024-12-06T04:07:09.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:21.746 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096)
00:16:21.746 xnvme_bdev : 5.00 36354.11 142.01 0.00 0.00 1754.53 43.52 777559.43
00:16:21.746 [2024-12-06T04:07:09.273Z] ===================================================================================================================
00:16:21.746 [2024-12-06T04:07:09.273Z] Total : 36354.11 142.01 0.00 0.00 1754.53 43.52 777559.43
00:16:22.683
00:16:22.683 real 0m25.516s
00:16:22.683 user 0m17.765s
00:16:22.683 sys 0m6.054s
00:16:22.683 04:07:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:22.683 04:07:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:22.683 ************************************
00:16:22.683 END TEST xnvme_bdevperf
00:16:22.683 ************************************
00:16:22.683 04:07:09 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:16:22.683 04:07:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:22.683 04:07:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:22.683 04:07:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:22.684 ************************************
00:16:22.684 START TEST xnvme_fio_plugin
00:16:22.684 ************************************
00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio
00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
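The fio_bdev helper whose trace begins here is a thin wrapper: it locates the SPDK fio plugin, prepends the matching sanitizer runtime (libasan in this build, found via the ldd/grep/awk steps below), and runs stock fio with the plugin in LD_PRELOAD. Stripped of the sanitizer handling, the invocation reduces to roughly this sketch (paths as in this workspace; bdev.json again stands in for the gen_conf output):

    # drive the bdev through SPDK's external fio ioengine; --filename names the bdev, not a device node
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev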
00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:22.684 04:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:22.684 { 00:16:22.684 "subsystems": [ 00:16:22.684 { 00:16:22.684 "subsystem": "bdev", 00:16:22.684 "config": [ 00:16:22.684 { 00:16:22.684 "params": { 00:16:22.684 "io_mechanism": "io_uring_cmd", 00:16:22.684 "conserve_cpu": true, 00:16:22.684 "filename": "/dev/ng0n1", 00:16:22.684 "name": "xnvme_bdev" 00:16:22.684 }, 00:16:22.684 "method": "bdev_xnvme_create" 00:16:22.684 }, 00:16:22.684 { 00:16:22.684 "method": "bdev_wait_for_examine" 00:16:22.684 } 00:16:22.684 ] 00:16:22.684 } 00:16:22.684 ] 00:16:22.684 } 00:16:22.684 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:22.684 fio-3.35 00:16:22.684 Starting 1 thread 00:16:29.240 00:16:29.240 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71733: Fri Dec 6 04:07:15 2024 00:16:29.240 read: IOPS=60.5k, BW=236MiB/s (248MB/s)(1181MiB/5001msec) 00:16:29.240 slat (nsec): min=2863, max=59135, avg=3786.98, stdev=1368.00 00:16:29.240 clat (usec): min=389, max=2236, avg=907.26, stdev=167.27 00:16:29.240 lat (usec): min=393, max=2247, avg=911.05, stdev=167.59 00:16:29.240 clat percentiles (usec): 00:16:29.240 | 1.00th=[ 676], 5.00th=[ 709], 10.00th=[ 734], 20.00th=[ 775], 00:16:29.240 | 30.00th=[ 807], 40.00th=[ 840], 50.00th=[ 873], 60.00th=[ 914], 00:16:29.240 | 70.00th=[ 955], 80.00th=[ 1020], 90.00th=[ 1106], 95.00th=[ 1205], 00:16:29.240 | 99.00th=[ 1483], 99.50th=[ 1565], 99.90th=[ 1942], 99.95th=[ 2057], 00:16:29.240 | 99.99th=[ 2147] 00:16:29.240 bw ( KiB/s): min=210011, max=253440, per=99.39%, avg=240365.67, stdev=15219.75, samples=9 00:16:29.240 iops : min=52502, max=63360, avg=60091.33, stdev=3805.13, samples=9 00:16:29.240 lat (usec) : 500=0.01%, 750=14.49%, 1000=62.10% 00:16:29.240 lat (msec) : 2=23.33%, 4=0.07% 00:16:29.240 cpu : usr=42.12%, sys=55.32%, ctx=10, majf=0, minf=762 00:16:29.240 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:29.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.240 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 
32=0.1%, 64=1.5%, >=64=0.0% 00:16:29.240 issued rwts: total=302368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.240 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:29.240 00:16:29.240 Run status group 0 (all jobs): 00:16:29.240 READ: bw=236MiB/s (248MB/s), 236MiB/s-236MiB/s (248MB/s-248MB/s), io=1181MiB (1238MB), run=5001-5001msec 00:16:29.240 ----------------------------------------------------- 00:16:29.240 Suppressions used: 00:16:29.240 count bytes template 00:16:29.240 1 11 /usr/src/fio/parse.c 00:16:29.240 1 8 libtcmalloc_minimal.so 00:16:29.240 1 904 libcrypto.so 00:16:29.240 ----------------------------------------------------- 00:16:29.240 00:16:29.240 04:07:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:29.240 04:07:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:29.240 04:07:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:29.240 04:07:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:29.240 04:07:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:29.240 04:07:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:29.240 04:07:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:29.240 04:07:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:29.240 04:07:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:29.240 04:07:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:29.240 04:07:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:29.240 04:07:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:29.240 04:07:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:29.240 04:07:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:29.240 04:07:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:29.240 04:07:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:29.240 04:07:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:29.240 04:07:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:29.240 04:07:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:29.240 04:07:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:29.240 04:07:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:29.240 { 00:16:29.240 "subsystems": [ 00:16:29.240 { 00:16:29.240 "subsystem": "bdev", 00:16:29.240 "config": [ 00:16:29.240 { 00:16:29.240 "params": { 00:16:29.240 "io_mechanism": "io_uring_cmd", 00:16:29.240 "conserve_cpu": true, 00:16:29.240 "filename": "/dev/ng0n1", 00:16:29.240 "name": "xnvme_bdev" 00:16:29.240 }, 00:16:29.240 "method": "bdev_xnvme_create" 00:16:29.240 }, 00:16:29.240 { 00:16:29.240 "method": "bdev_wait_for_examine" 00:16:29.240 } 00:16:29.240 ] 00:16:29.240 } 00:16:29.240 ] 00:16:29.240 } 00:16:29.498 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:29.498 fio-3.35 00:16:29.498 Starting 1 thread 00:16:36.171 00:16:36.171 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71829: Fri Dec 6 04:07:22 2024 00:16:36.171 write: IOPS=56.0k, BW=219MiB/s (229MB/s)(1094MiB/5001msec); 0 zone resets 00:16:36.171 slat (nsec): min=2233, max=99911, avg=4458.48, stdev=2324.85 00:16:36.171 clat (usec): min=586, max=2660, avg=966.57, stdev=206.64 00:16:36.171 lat (usec): min=589, max=2701, avg=971.03, stdev=207.84 00:16:36.171 clat percentiles (usec): 00:16:36.171 | 1.00th=[ 685], 5.00th=[ 717], 10.00th=[ 742], 20.00th=[ 791], 00:16:36.171 | 30.00th=[ 840], 40.00th=[ 881], 50.00th=[ 922], 60.00th=[ 971], 00:16:36.171 | 70.00th=[ 1045], 80.00th=[ 1123], 90.00th=[ 1237], 95.00th=[ 1369], 00:16:36.171 | 99.00th=[ 1614], 99.50th=[ 1696], 99.90th=[ 1991], 99.95th=[ 2114], 00:16:36.171 | 99.99th=[ 2442] 00:16:36.171 bw ( KiB/s): min=217600, max=227840, per=100.00%, avg=224654.22, stdev=3048.20, samples=9 00:16:36.171 iops : min=54400, max=56960, avg=56163.56, stdev=762.05, samples=9 00:16:36.171 lat (usec) : 750=10.90%, 1000=53.39% 00:16:36.171 lat (msec) : 2=35.62%, 4=0.09% 00:16:36.171 cpu : usr=44.68%, sys=52.64%, ctx=12, majf=0, minf=763 00:16:36.171 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:36.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.171 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:16:36.171 issued rwts: total=0,280000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.171 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:36.171 00:16:36.171 Run status group 0 (all jobs): 00:16:36.171 WRITE: bw=219MiB/s (229MB/s), 219MiB/s-219MiB/s (229MB/s-229MB/s), io=1094MiB (1147MB), run=5001-5001msec 00:16:36.171 ----------------------------------------------------- 00:16:36.171 Suppressions used: 00:16:36.171 count bytes template 00:16:36.171 1 11 /usr/src/fio/parse.c 00:16:36.171 1 8 libtcmalloc_minimal.so 00:16:36.171 1 904 libcrypto.so 00:16:36.171 ----------------------------------------------------- 00:16:36.171 00:16:36.171 ************************************ 00:16:36.171 END TEST xnvme_fio_plugin 00:16:36.171 ************************************ 00:16:36.171 00:16:36.171 real 0m13.483s 00:16:36.171 user 0m6.997s 00:16:36.171 sys 0m5.905s 00:16:36.171 04:07:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:36.171 04:07:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:36.171 04:07:23 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71329 00:16:36.171 04:07:23 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71329 ']' 00:16:36.171 04:07:23 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 71329 00:16:36.171 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71329) - No such process 00:16:36.171 Process with pid 71329 is not found 00:16:36.171 04:07:23 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71329 is not found' 00:16:36.171 00:16:36.171 real 3m27.216s 00:16:36.171 user 1m55.098s 00:16:36.171 sys 1m17.791s 00:16:36.171 ************************************ 00:16:36.171 END TEST nvme_xnvme 00:16:36.171 ************************************ 00:16:36.171 04:07:23 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:36.171 04:07:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:36.171 04:07:23 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:36.171 04:07:23 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:36.171 04:07:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:36.171 04:07:23 -- common/autotest_common.sh@10 -- # set +x 00:16:36.171 ************************************ 00:16:36.171 START TEST blockdev_xnvme 00:16:36.171 ************************************ 00:16:36.171 04:07:23 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:36.171 * Looking for test storage... 00:16:36.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:36.171 04:07:23 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:36.171 04:07:23 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:16:36.171 04:07:23 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:36.171 04:07:23 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:36.171 04:07:23 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:16:36.171 04:07:23 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:36.171 04:07:23 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:36.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.172 --rc genhtml_branch_coverage=1 00:16:36.172 --rc genhtml_function_coverage=1 00:16:36.172 --rc genhtml_legend=1 00:16:36.172 --rc geninfo_all_blocks=1 00:16:36.172 --rc geninfo_unexecuted_blocks=1 00:16:36.172 00:16:36.172 ' 00:16:36.172 04:07:23 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:36.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.172 --rc genhtml_branch_coverage=1 00:16:36.172 --rc genhtml_function_coverage=1 00:16:36.172 --rc genhtml_legend=1 00:16:36.172 --rc geninfo_all_blocks=1 00:16:36.172 --rc geninfo_unexecuted_blocks=1 00:16:36.172 00:16:36.172 ' 00:16:36.172 04:07:23 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:36.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.172 --rc genhtml_branch_coverage=1 00:16:36.172 --rc genhtml_function_coverage=1 00:16:36.172 --rc genhtml_legend=1 00:16:36.172 --rc geninfo_all_blocks=1 00:16:36.172 --rc geninfo_unexecuted_blocks=1 00:16:36.172 00:16:36.172 ' 00:16:36.172 04:07:23 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:36.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.172 --rc genhtml_branch_coverage=1 00:16:36.172 --rc genhtml_function_coverage=1 00:16:36.172 --rc genhtml_legend=1 00:16:36.172 --rc geninfo_all_blocks=1 00:16:36.172 --rc geninfo_unexecuted_blocks=1 00:16:36.172 00:16:36.172 ' 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71958 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:36.172 04:07:23 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71958 00:16:36.172 04:07:23 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 71958 ']' 00:16:36.172 04:07:23 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.172 04:07:23 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.172 04:07:23 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.172 04:07:23 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.172 04:07:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:36.172 [2024-12-06 04:07:23.666600] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
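setup_xnvme_conf, traced below, builds the RPC batch for this job: enumerate the /dev/nvme*n* block nodes, skip zoned namespaces (the is_block_zoned checks), and queue one bdev_xnvme_create per remaining node. Condensed to its core, with the zoned-device filtering elided:

    # one bdev_xnvme_create per block node, bdev named after the node's basename;
    # io_mechanism is io_uring for this test and -c keeps conserve_cpu on
    io_mechanism=io_uring
    nvmes=()
    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue
        nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
    done
    printf '%s\n' "${nvmes[@]}" | rpc_cmd    # batched into one rpc_cmd call, as at blockdev.sh@100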
00:16:36.172 [2024-12-06 04:07:23.666866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71958 ] 00:16:36.433 [2024-12-06 04:07:23.817390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.433 [2024-12-06 04:07:23.905122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.001 04:07:24 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.001 04:07:24 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:16:37.001 04:07:24 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:16:37.001 04:07:24 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:16:37.001 04:07:24 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:16:37.001 04:07:24 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:16:37.001 04:07:24 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:37.566 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:37.824 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:16:37.824 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:16:37.824 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:16:37.824 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:16:37.824 04:07:25 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:16:37.824 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.824 04:07:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:37.825 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:16:37.825 nvme0n1 00:16:37.825 nvme0n2 00:16:38.082 nvme0n3 00:16:38.082 nvme1n1 00:16:38.082 nvme2n1 00:16:38.082 nvme3n1 00:16:38.082 04:07:25 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.082 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:16:38.082 04:07:25 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.082 04:07:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:38.082 04:07:25 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.082 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:16:38.082 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:16:38.082 04:07:25 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.082 04:07:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:38.082 04:07:25 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.082 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:16:38.082 04:07:25 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.082 04:07:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:38.082 04:07:25 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.082 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:38.083 04:07:25 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.083 04:07:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:38.083 
04:07:25 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.083 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:16:38.083 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:16:38.083 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:16:38.083 04:07:25 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.083 04:07:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:38.083 04:07:25 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.083 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:16:38.083 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:16:38.083 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "1cf08e97-9499-4c8e-8b83-16776b5c5fa1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1cf08e97-9499-4c8e-8b83-16776b5c5fa1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "a318db4e-b674-43ec-8529-8000cbf9446c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a318db4e-b674-43ec-8529-8000cbf9446c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "1fb900db-32e4-47e2-b129-76a62ceac15f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1fb900db-32e4-47e2-b129-76a62ceac15f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "d26ae0a1-f88a-4246-ad47-f9e72cec15e1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d26ae0a1-f88a-4246-ad47-f9e72cec15e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "1f61c749-ec94-479f-8fde-12280016432e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "1f61c749-ec94-479f-8fde-12280016432e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b92a2470-272c-41ef-bb19-d2e487d23cc7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b92a2470-272c-41ef-bb19-d2e487d23cc7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:38.083 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:16:38.083 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:16:38.083 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:16:38.083 04:07:25 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 71958 00:16:38.083 04:07:25 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71958 ']' 00:16:38.083 04:07:25 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 71958 00:16:38.083 04:07:25 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:16:38.083 04:07:25 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.083 04:07:25 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 71958 00:16:38.083 killing process with pid 71958 00:16:38.083 04:07:25 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:38.083 04:07:25 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:38.083 04:07:25 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71958' 00:16:38.083 04:07:25 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 71958 00:16:38.083 04:07:25 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 71958 00:16:39.983 04:07:27 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:39.983 04:07:27 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:39.983 04:07:27 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:39.983 04:07:27 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.983 04:07:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:39.983 ************************************ 00:16:39.983 START TEST bdev_hello_world 00:16:39.983 ************************************ 00:16:39.983 04:07:27 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:39.983 [2024-12-06 04:07:27.132287] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:16:39.983 [2024-12-06 04:07:27.132410] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72231 ] 00:16:39.983 [2024-12-06 04:07:27.293571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.983 [2024-12-06 04:07:27.393579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.241 [2024-12-06 04:07:27.734349] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:40.241 [2024-12-06 04:07:27.734588] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:16:40.241 [2024-12-06 04:07:27.734616] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:40.241 [2024-12-06 04:07:27.736535] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:40.241 [2024-12-06 04:07:27.737138] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:40.241 [2024-12-06 04:07:27.737175] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:40.241 [2024-12-06 04:07:27.737337] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
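bdev_hello_world above is the stock hello_bdev example pointed at the first xnvme bdev: open it through the bdev layer, write "Hello World!", read it back, and compare, exactly as the NOTICE lines show. Reproduced standalone with the same generated config:

    # paths relative to the spdk repo; nvme0n1 is the xnvme bdev created from /dev/nvme0n1
    build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1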
00:16:40.241 00:16:40.241 [2024-12-06 04:07:27.737357] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:41.175 00:16:41.175 real 0m1.389s 00:16:41.175 user 0m1.096s 00:16:41.175 sys 0m0.178s 00:16:41.175 04:07:28 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.175 ************************************ 00:16:41.175 END TEST bdev_hello_world 00:16:41.175 ************************************ 00:16:41.175 04:07:28 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:41.175 04:07:28 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:16:41.175 04:07:28 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:41.175 04:07:28 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.175 04:07:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:41.175 ************************************ 00:16:41.175 START TEST bdev_bounds 00:16:41.175 ************************************ 00:16:41.175 04:07:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:16:41.175 Process bdevio pid: 72273 00:16:41.175 04:07:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72273 00:16:41.175 04:07:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:41.175 04:07:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:41.175 04:07:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72273' 00:16:41.175 04:07:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72273 00:16:41.175 04:07:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72273 ']' 00:16:41.175 04:07:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.175 04:07:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.175 04:07:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.175 04:07:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.175 04:07:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:41.175 [2024-12-06 04:07:28.567741] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:16:41.175 [2024-12-06 04:07:28.568105] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72273 ] 00:16:41.433 [2024-12-06 04:07:28.729974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:41.433 [2024-12-06 04:07:28.833440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.433 [2024-12-06 04:07:28.833831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.433 [2024-12-06 04:07:28.833835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:42.001 04:07:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:42.001 04:07:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:16:42.001 04:07:29 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:42.001 I/O targets: 00:16:42.001 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:42.001 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:42.001 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:42.001 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:16:42.001 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:16:42.001 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:16:42.001 00:16:42.001 00:16:42.001 CUnit - A unit testing framework for C - Version 2.1-3 00:16:42.001 http://cunit.sourceforge.net/ 00:16:42.001 00:16:42.001 00:16:42.001 Suite: bdevio tests on: nvme3n1 00:16:42.001 Test: blockdev write read block ...passed 00:16:42.001 Test: blockdev write zeroes read block ...passed 00:16:42.001 Test: blockdev write zeroes read no split ...passed 00:16:42.001 Test: blockdev write zeroes read split ...passed 00:16:42.001 Test: blockdev write zeroes read split partial ...passed 00:16:42.001 Test: blockdev reset ...passed 00:16:42.001 Test: blockdev write read 8 blocks ...passed 00:16:42.001 Test: blockdev write read size > 128k ...passed 00:16:42.001 Test: blockdev write read invalid size ...passed 00:16:42.001 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:42.001 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:42.001 Test: blockdev write read max offset ...passed 00:16:42.001 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:42.001 Test: blockdev writev readv 8 blocks ...passed 00:16:42.001 Test: blockdev writev readv 30 x 1block ...passed 00:16:42.001 Test: blockdev writev readv block ...passed 00:16:42.001 Test: blockdev writev readv size > 128k ...passed 00:16:42.001 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:42.001 Test: blockdev comparev and writev ...passed 00:16:42.001 Test: blockdev nvme passthru rw ...passed 00:16:42.001 Test: blockdev nvme passthru vendor specific ...passed 00:16:42.001 Test: blockdev nvme admin passthru ...passed 00:16:42.001 Test: blockdev copy ...passed 00:16:42.001 Suite: bdevio tests on: nvme2n1 00:16:42.001 Test: blockdev write read block ...passed 00:16:42.001 Test: blockdev write zeroes read block ...passed 00:16:42.001 Test: blockdev write zeroes read no split ...passed 00:16:42.001 Test: blockdev write zeroes read split ...passed 00:16:42.260 Test: blockdev write zeroes read split partial ...passed 00:16:42.260 Test: blockdev reset ...passed 
00:16:42.260 Test: blockdev write read 8 blocks ...passed 00:16:42.260 Test: blockdev write read size > 128k ...passed 00:16:42.260 Test: blockdev write read invalid size ...passed 00:16:42.260 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:42.260 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:42.260 Test: blockdev write read max offset ...passed 00:16:42.260 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:42.260 Test: blockdev writev readv 8 blocks ...passed 00:16:42.260 Test: blockdev writev readv 30 x 1block ...passed 00:16:42.260 Test: blockdev writev readv block ...passed 00:16:42.260 Test: blockdev writev readv size > 128k ...passed 00:16:42.260 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:42.260 Test: blockdev comparev and writev ...passed 00:16:42.260 Test: blockdev nvme passthru rw ...passed 00:16:42.260 Test: blockdev nvme passthru vendor specific ...passed 00:16:42.260 Test: blockdev nvme admin passthru ...passed 00:16:42.260 Test: blockdev copy ...passed 00:16:42.260 Suite: bdevio tests on: nvme1n1 00:16:42.260 Test: blockdev write read block ...passed 00:16:42.260 Test: blockdev write zeroes read block ...passed 00:16:42.260 Test: blockdev write zeroes read no split ...passed 00:16:42.260 Test: blockdev write zeroes read split ...passed 00:16:42.260 Test: blockdev write zeroes read split partial ...passed 00:16:42.260 Test: blockdev reset ...passed 00:16:42.260 Test: blockdev write read 8 blocks ...passed 00:16:42.260 Test: blockdev write read size > 128k ...passed 00:16:42.260 Test: blockdev write read invalid size ...passed 00:16:42.260 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:42.260 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:42.260 Test: blockdev write read max offset ...passed 00:16:42.260 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:42.260 Test: blockdev writev readv 8 blocks ...passed 00:16:42.260 Test: blockdev writev readv 30 x 1block ...passed 00:16:42.260 Test: blockdev writev readv block ...passed 00:16:42.261 Test: blockdev writev readv size > 128k ...passed 00:16:42.261 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:42.261 Test: blockdev comparev and writev ...passed 00:16:42.261 Test: blockdev nvme passthru rw ...passed 00:16:42.261 Test: blockdev nvme passthru vendor specific ...passed 00:16:42.261 Test: blockdev nvme admin passthru ...passed 00:16:42.261 Test: blockdev copy ...passed 00:16:42.261 Suite: bdevio tests on: nvme0n3 00:16:42.261 Test: blockdev write read block ...passed 00:16:42.261 Test: blockdev write zeroes read block ...passed 00:16:42.261 Test: blockdev write zeroes read no split ...passed 00:16:42.261 Test: blockdev write zeroes read split ...passed 00:16:42.261 Test: blockdev write zeroes read split partial ...passed 00:16:42.261 Test: blockdev reset ...passed 00:16:42.261 Test: blockdev write read 8 blocks ...passed 00:16:42.261 Test: blockdev write read size > 128k ...passed 00:16:42.261 Test: blockdev write read invalid size ...passed 00:16:42.261 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:42.261 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:42.261 Test: blockdev write read max offset ...passed 00:16:42.261 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:42.261 Test: blockdev writev readv 8 blocks 
...passed 00:16:42.261 Test: blockdev writev readv 30 x 1block ...passed 00:16:42.261 Test: blockdev writev readv block ...passed 00:16:42.261 Test: blockdev writev readv size > 128k ...passed 00:16:42.261 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:42.261 Test: blockdev comparev and writev ...passed 00:16:42.261 Test: blockdev nvme passthru rw ...passed 00:16:42.261 Test: blockdev nvme passthru vendor specific ...passed 00:16:42.261 Test: blockdev nvme admin passthru ...passed 00:16:42.261 Test: blockdev copy ...passed 00:16:42.261 Suite: bdevio tests on: nvme0n2 00:16:42.261 Test: blockdev write read block ...passed 00:16:42.261 Test: blockdev write zeroes read block ...passed 00:16:42.261 Test: blockdev write zeroes read no split ...passed 00:16:42.261 Test: blockdev write zeroes read split ...passed 00:16:42.261 Test: blockdev write zeroes read split partial ...passed 00:16:42.261 Test: blockdev reset ...passed 00:16:42.261 Test: blockdev write read 8 blocks ...passed 00:16:42.261 Test: blockdev write read size > 128k ...passed 00:16:42.261 Test: blockdev write read invalid size ...passed 00:16:42.261 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:42.261 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:42.261 Test: blockdev write read max offset ...passed 00:16:42.261 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:42.261 Test: blockdev writev readv 8 blocks ...passed 00:16:42.261 Test: blockdev writev readv 30 x 1block ...passed 00:16:42.261 Test: blockdev writev readv block ...passed 00:16:42.261 Test: blockdev writev readv size > 128k ...passed 00:16:42.261 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:42.261 Test: blockdev comparev and writev ...passed 00:16:42.261 Test: blockdev nvme passthru rw ...passed 00:16:42.261 Test: blockdev nvme passthru vendor specific ...passed 00:16:42.261 Test: blockdev nvme admin passthru ...passed 00:16:42.261 Test: blockdev copy ...passed 00:16:42.261 Suite: bdevio tests on: nvme0n1 00:16:42.261 Test: blockdev write read block ...passed 00:16:42.261 Test: blockdev write zeroes read block ...passed 00:16:42.261 Test: blockdev write zeroes read no split ...passed 00:16:42.261 Test: blockdev write zeroes read split ...passed 00:16:42.261 Test: blockdev write zeroes read split partial ...passed 00:16:42.261 Test: blockdev reset ...passed 00:16:42.261 Test: blockdev write read 8 blocks ...passed 00:16:42.261 Test: blockdev write read size > 128k ...passed 00:16:42.261 Test: blockdev write read invalid size ...passed 00:16:42.261 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:42.261 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:42.261 Test: blockdev write read max offset ...passed 00:16:42.261 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:42.261 Test: blockdev writev readv 8 blocks ...passed 00:16:42.261 Test: blockdev writev readv 30 x 1block ...passed 00:16:42.261 Test: blockdev writev readv block ...passed 00:16:42.261 Test: blockdev writev readv size > 128k ...passed 00:16:42.261 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:42.261 Test: blockdev comparev and writev ...passed 00:16:42.261 Test: blockdev nvme passthru rw ...passed 00:16:42.261 Test: blockdev nvme passthru vendor specific ...passed 00:16:42.261 Test: blockdev nvme admin passthru ...passed 00:16:42.261 Test: blockdev copy ...passed 
00:16:42.261 00:16:42.261 Run Summary: Type Total Ran Passed Failed Inactive 00:16:42.261 suites 6 6 n/a 0 0 00:16:42.261 tests 138 138 138 0 0 00:16:42.261 asserts 780 780 780 0 n/a 00:16:42.261 00:16:42.261 Elapsed time = 0.900 seconds 00:16:42.261 0 00:16:42.261 04:07:29 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72273 00:16:42.261 04:07:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72273 ']' 00:16:42.261 04:07:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72273 00:16:42.261 04:07:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:16:42.261 04:07:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:42.261 04:07:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72273 00:16:42.520 killing process with pid 72273 00:16:42.520 04:07:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:42.520 04:07:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:42.520 04:07:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72273' 00:16:42.520 04:07:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72273 00:16:42.520 04:07:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72273 00:16:43.098 ************************************ 00:16:43.098 END TEST bdev_bounds 00:16:43.098 ************************************ 00:16:43.098 04:07:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:43.098 00:16:43.098 real 0m1.889s 00:16:43.098 user 0m4.611s 00:16:43.098 sys 0m0.293s 00:16:43.098 04:07:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:43.098 04:07:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:43.098 04:07:30 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:16:43.098 04:07:30 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:43.098 04:07:30 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:43.098 04:07:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:43.098 ************************************ 00:16:43.098 START TEST bdev_nbd 00:16:43.098 ************************************ 00:16:43.098 04:07:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:16:43.098 04:07:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:43.098 04:07:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:43.098 04:07:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:43.098 04:07:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:43.098 04:07:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:43.098 04:07:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:43.098 04:07:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
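From here bdev_nbd takes over: nbd_function_test checks that the kernel nbd module is present, pairs six /dev/nbdX nodes with the six bdevs, then starts a bdev_svc target on its own RPC socket and exports each bdev as a kernel block device. A condensed sketch of that lifecycle for one device, every command taken from the trace that follows (the readiness waits and size checks between steps are omitted here):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    [[ -e /sys/module/nbd ]]                        # kernel nbd support loaded?
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    $rpc nbd_start_disk nvme0n1 /dev/nbd0           # map bdev -> /dev/nbd0
    $rpc nbd_get_disks | jq -r '.[] | .nbd_device'  # verify the mapping
    $rpc nbd_stop_disk /dev/nbd0                    # tear the node back down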
00:16:43.098 04:07:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:43.098 04:07:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:43.098 04:07:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:43.098 04:07:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:16:43.098 04:07:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:43.098 04:07:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:43.098 04:07:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:43.098 04:07:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:43.098 04:07:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72327 00:16:43.098 04:07:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:43.098 04:07:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72327 /var/tmp/spdk-nbd.sock 00:16:43.099 04:07:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72327 ']' 00:16:43.099 04:07:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:43.099 04:07:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.099 04:07:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:43.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:43.099 04:07:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:43.099 04:07:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.099 04:07:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:43.099 [2024-12-06 04:07:30.487478] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
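Each exported node is then proven usable by the waitfornbd helper whose trace dominates the next stretch: poll /proc/partitions until the node appears, read one 4 KiB block with O_DIRECT, and require a non-empty result. A simplified reconstruction from those records (the real helper in test/common/autotest_common.sh retries the dd in a second loop, collapsed here, and the back-off sleep is an assumption):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                       # assumed pause between polls
        done
        # O_DIRECT read of a single block off the new node
        dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
            bs=4096 count=1 iflag=direct
        local size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        [ "$size" != 0 ]                    # 4096 bytes in every run above
    }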
00:16:43.099 [2024-12-06 04:07:30.487578] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.357 [2024-12-06 04:07:30.644684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.357 [2024-12-06 04:07:30.748347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.924 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.924 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:16:43.924 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:16:43.924 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:43.924 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:43.924 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:43.924 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:16:43.924 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:43.924 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:43.924 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:43.924 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:43.924 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:43.924 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:43.924 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:43.924 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:16:44.183 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:44.183 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:44.183 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:44.183 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:44.183 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:44.183 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:44.183 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:44.183 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:44.183 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:44.183 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:44.183 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:44.183 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:44.183 
1+0 records in 00:16:44.183 1+0 records out 00:16:44.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398057 s, 10.3 MB/s 00:16:44.183 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.183 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:44.183 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.183 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:44.183 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:44.183 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:44.183 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:44.183 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:16:44.441 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:16:44.441 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:16:44.441 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:16:44.441 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:44.441 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:44.441 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:44.441 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:44.441 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:44.441 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:44.441 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:44.441 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:44.441 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:44.441 1+0 records in 00:16:44.441 1+0 records out 00:16:44.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487995 s, 8.4 MB/s 00:16:44.441 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.441 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:44.442 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.442 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:44.442 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:44.442 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:44.442 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:44.442 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:16:44.442 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:16:44.442 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:16:44.442 04:07:31 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:16:44.442 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:16:44.701 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:44.701 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:44.701 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:44.701 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:16:44.701 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:44.701 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:44.701 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:44.701 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:44.701 1+0 records in 00:16:44.701 1+0 records out 00:16:44.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386294 s, 10.6 MB/s 00:16:44.701 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.701 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:44.701 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.701 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:44.701 04:07:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:44.701 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:44.701 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:44.701 04:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:16:44.701 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:16:44.701 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:16:44.701 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:16:44.701 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:16:44.701 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:44.701 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:44.701 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:44.701 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:16:44.701 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:44.701 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:44.701 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:44.701 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:44.701 1+0 records in 00:16:44.701 1+0 records out 00:16:44.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039423 s, 10.4 MB/s 00:16:44.701 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.701 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:44.701 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.701 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:44.701 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:44.701 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:44.701 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:44.701 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:16:44.960 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:16:44.960 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:16:44.960 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:16:44.960 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:16:44.960 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:44.960 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:44.960 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:44.960 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:16:44.960 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:44.960 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:44.960 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:44.960 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:44.960 1+0 records in 00:16:44.960 1+0 records out 00:16:44.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037602 s, 10.9 MB/s 00:16:44.960 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.960 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:44.960 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.960 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:44.960 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:44.960 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:44.960 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:44.960 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:16:45.218 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:16:45.218 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:16:45.218 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:16:45.218 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:16:45.218 04:07:32 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:45.218 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:45.218 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:45.218 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:16:45.218 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:45.218 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:45.218 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:45.218 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:45.218 1+0 records in 00:16:45.218 1+0 records out 00:16:45.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568286 s, 7.2 MB/s 00:16:45.218 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:45.218 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:45.218 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:45.218 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:45.218 04:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:45.218 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:45.218 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:45.218 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:45.475 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:45.475 { 00:16:45.475 "nbd_device": "/dev/nbd0", 00:16:45.475 "bdev_name": "nvme0n1" 00:16:45.475 }, 00:16:45.475 { 00:16:45.475 "nbd_device": "/dev/nbd1", 00:16:45.475 "bdev_name": "nvme0n2" 00:16:45.475 }, 00:16:45.475 { 00:16:45.475 "nbd_device": "/dev/nbd2", 00:16:45.475 "bdev_name": "nvme0n3" 00:16:45.475 }, 00:16:45.475 { 00:16:45.476 "nbd_device": "/dev/nbd3", 00:16:45.476 "bdev_name": "nvme1n1" 00:16:45.476 }, 00:16:45.476 { 00:16:45.476 "nbd_device": "/dev/nbd4", 00:16:45.476 "bdev_name": "nvme2n1" 00:16:45.476 }, 00:16:45.476 { 00:16:45.476 "nbd_device": "/dev/nbd5", 00:16:45.476 "bdev_name": "nvme3n1" 00:16:45.476 } 00:16:45.476 ]' 00:16:45.476 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:45.476 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:45.476 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:45.476 { 00:16:45.476 "nbd_device": "/dev/nbd0", 00:16:45.476 "bdev_name": "nvme0n1" 00:16:45.476 }, 00:16:45.476 { 00:16:45.476 "nbd_device": "/dev/nbd1", 00:16:45.476 "bdev_name": "nvme0n2" 00:16:45.476 }, 00:16:45.476 { 00:16:45.476 "nbd_device": "/dev/nbd2", 00:16:45.476 "bdev_name": "nvme0n3" 00:16:45.476 }, 00:16:45.476 { 00:16:45.476 "nbd_device": "/dev/nbd3", 00:16:45.476 "bdev_name": "nvme1n1" 00:16:45.476 }, 00:16:45.476 { 00:16:45.476 "nbd_device": "/dev/nbd4", 00:16:45.476 "bdev_name": "nvme2n1" 00:16:45.476 }, 00:16:45.476 { 00:16:45.476 "nbd_device": 
"/dev/nbd5", 00:16:45.476 "bdev_name": "nvme3n1" 00:16:45.476 } 00:16:45.476 ]' 00:16:45.476 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:16:45.476 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:45.476 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:16:45.476 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:45.476 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:45.476 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:45.476 04:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:45.733 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:45.733 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:45.733 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:45.733 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:45.733 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:45.733 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:45.733 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:45.733 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:45.733 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:45.733 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:45.991 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:45.991 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:45.991 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:45.991 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:45.991 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:45.991 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:45.991 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:45.991 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:45.991 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:45.991 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:16:46.249 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:16:46.249 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:16:46.249 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:16:46.249 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.249 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.249 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:16:46.249 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:46.249 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.249 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:46.249 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:16:46.249 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:16:46.249 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:16:46.249 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:16:46.249 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.249 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.249 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:16:46.249 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:46.249 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.249 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:46.249 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:16:46.508 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:16:46.508 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:16:46.508 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:16:46.508 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.508 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.508 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:16:46.508 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:46.508 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.508 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:46.508 04:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:16:46.842 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:16:46.842 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:16:46.842 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:16:46.842 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.842 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.842 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:16:46.842 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:46.842 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.842 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:46.842 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:46.842 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:16:47.102 /dev/nbd0 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:47.102 1+0 records in 00:16:47.102 1+0 records out 00:16:47.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412661 s, 9.9 MB/s 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:47.102 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:16:47.360 /dev/nbd1 00:16:47.360 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:47.360 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:47.360 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:47.360 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:47.360 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:47.360 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:47.360 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:47.360 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:47.360 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:47.360 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:47.360 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:47.360 1+0 records in 00:16:47.360 1+0 records out 00:16:47.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427704 s, 9.6 MB/s 00:16:47.360 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.360 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:47.360 04:07:34 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.360 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:47.360 04:07:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:47.360 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:47.360 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:47.360 04:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:16:47.619 /dev/nbd10 00:16:47.619 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:47.619 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:47.619 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:16:47.619 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:47.619 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:47.619 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:47.619 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:16:47.619 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:47.619 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:47.619 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:47.619 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:47.619 1+0 records in 00:16:47.619 1+0 records out 00:16:47.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527058 s, 7.8 MB/s 00:16:47.619 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.619 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:47.619 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.619 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:47.619 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:47.619 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:47.619 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:47.619 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:16:47.877 /dev/nbd11 00:16:47.877 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:47.877 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:47.877 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:16:47.877 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:47.877 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:47.877 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:47.877 04:07:35 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:16:47.877 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:47.877 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:47.877 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:47.877 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:47.877 1+0 records in 00:16:47.877 1+0 records out 00:16:47.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435261 s, 9.4 MB/s 00:16:47.877 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.877 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:47.877 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.877 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:47.877 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:47.877 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:47.877 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:47.877 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:16:48.135 /dev/nbd12 00:16:48.135 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:48.135 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:48.135 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:16:48.135 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:48.135 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:48.135 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:48.135 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:16:48.135 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:48.135 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:48.135 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:48.135 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:48.135 1+0 records in 00:16:48.135 1+0 records out 00:16:48.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481345 s, 8.5 MB/s 00:16:48.135 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:48.135 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:48.135 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:48.135 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:48.135 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:48.135 04:07:35 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:48.135 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:48.135 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:16:48.392 /dev/nbd13 00:16:48.392 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:48.392 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:48.392 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:16:48.392 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:48.392 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:48.392 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:48.392 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:16:48.392 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:48.392 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:48.392 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:48.392 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:48.392 1+0 records in 00:16:48.392 1+0 records out 00:16:48.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476977 s, 8.6 MB/s 00:16:48.392 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:48.392 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:48.392 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:48.392 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:48.393 04:07:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:48.393 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:48.393 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:48.393 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:48.393 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:48.393 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:48.651 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:48.651 { 00:16:48.651 "nbd_device": "/dev/nbd0", 00:16:48.651 "bdev_name": "nvme0n1" 00:16:48.651 }, 00:16:48.651 { 00:16:48.651 "nbd_device": "/dev/nbd1", 00:16:48.651 "bdev_name": "nvme0n2" 00:16:48.651 }, 00:16:48.651 { 00:16:48.651 "nbd_device": "/dev/nbd10", 00:16:48.651 "bdev_name": "nvme0n3" 00:16:48.651 }, 00:16:48.651 { 00:16:48.651 "nbd_device": "/dev/nbd11", 00:16:48.651 "bdev_name": "nvme1n1" 00:16:48.651 }, 00:16:48.651 { 00:16:48.651 "nbd_device": "/dev/nbd12", 00:16:48.651 "bdev_name": "nvme2n1" 00:16:48.651 }, 00:16:48.651 { 00:16:48.651 "nbd_device": "/dev/nbd13", 00:16:48.651 "bdev_name": "nvme3n1" 00:16:48.651 } 00:16:48.651 ]' 00:16:48.651 04:07:35 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:48.651 { 00:16:48.651 "nbd_device": "/dev/nbd0", 00:16:48.651 "bdev_name": "nvme0n1" 00:16:48.651 }, 00:16:48.651 { 00:16:48.651 "nbd_device": "/dev/nbd1", 00:16:48.651 "bdev_name": "nvme0n2" 00:16:48.651 }, 00:16:48.651 { 00:16:48.651 "nbd_device": "/dev/nbd10", 00:16:48.651 "bdev_name": "nvme0n3" 00:16:48.651 }, 00:16:48.651 { 00:16:48.651 "nbd_device": "/dev/nbd11", 00:16:48.651 "bdev_name": "nvme1n1" 00:16:48.651 }, 00:16:48.651 { 00:16:48.651 "nbd_device": "/dev/nbd12", 00:16:48.651 "bdev_name": "nvme2n1" 00:16:48.651 }, 00:16:48.651 { 00:16:48.651 "nbd_device": "/dev/nbd13", 00:16:48.651 "bdev_name": "nvme3n1" 00:16:48.651 } 00:16:48.651 ]' 00:16:48.651 04:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:48.651 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:48.651 /dev/nbd1 00:16:48.651 /dev/nbd10 00:16:48.651 /dev/nbd11 00:16:48.651 /dev/nbd12 00:16:48.651 /dev/nbd13' 00:16:48.651 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:48.651 /dev/nbd1 00:16:48.651 /dev/nbd10 00:16:48.651 /dev/nbd11 00:16:48.651 /dev/nbd12 00:16:48.651 /dev/nbd13' 00:16:48.651 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:48.651 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:16:48.651 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:16:48.651 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:16:48.651 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:16:48.651 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:16:48.651 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:48.651 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:48.651 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:48.651 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:48.651 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:48.651 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:48.651 256+0 records in 00:16:48.651 256+0 records out 00:16:48.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00753964 s, 139 MB/s 00:16:48.651 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:48.651 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:48.651 256+0 records in 00:16:48.651 256+0 records out 00:16:48.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0620752 s, 16.9 MB/s 00:16:48.651 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:48.651 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:48.651 256+0 records in 00:16:48.651 256+0 records out 00:16:48.651 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0647824 s, 16.2 MB/s 00:16:48.651 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:48.651 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:48.909 256+0 records in 00:16:48.909 256+0 records out 00:16:48.909 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0644726 s, 16.3 MB/s 00:16:48.909 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:48.910 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:48.910 256+0 records in 00:16:48.910 256+0 records out 00:16:48.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0649615 s, 16.1 MB/s 00:16:48.910 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:48.910 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:48.910 256+0 records in 00:16:48.910 256+0 records out 00:16:48.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0755025 s, 13.9 MB/s 00:16:48.910 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:48.910 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:49.185 256+0 records in 00:16:49.185 256+0 records out 00:16:49.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0677204 s, 15.5 MB/s 00:16:49.185 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:16:49.185 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:49.185 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:49.185 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:49.185 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:49.185 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:49.185 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:49.185 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:49.185 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.186 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:49.443 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:49.443 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:49.443 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:49.443 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:49.443 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:49.443 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:49.443 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:49.443 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:49.443 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.443 04:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:16:49.701 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:49.701 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:49.701 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:49.701 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:49.701 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:49.701 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:49.701 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:49.701 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:49.701 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.701 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:49.960 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:49.960 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:49.960 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:49.960 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:49.960 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:49.960 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:49.960 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:49.960 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:49.960 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.960 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:50.218 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:16:50.218 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:50.218 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:50.218 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:50.218 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:50.218 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:50.218 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:50.218 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:50.218 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:50.218 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:50.477 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:50.477 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:50.477 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:50.477 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:50.477 04:07:37 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:50.477 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:50.477 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:50.477 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:50.477 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:50.477 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:50.477 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:50.477 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:50.477 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:50.477 04:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:50.477 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:50.734 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:50.734 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:50.734 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:50.734 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:50.734 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:50.734 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:50.734 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:50.734 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:50.734 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:50.734 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:50.734 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:50.734 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:50.991 malloc_lvol_verify 00:16:50.991 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:50.991 52ced3c2-93e0-4e02-a957-986a4114db25 00:16:50.991 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:51.249 1689486b-b91f-4282-81dc-ed81b657ca72 00:16:51.249 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:51.507 /dev/nbd0 00:16:51.508 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:51.508 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:51.508 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:51.508 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:51.508 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
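The waitfornbd polling traced above (and the wait_for_nbd_set_capacity check just before the mkfs.ext4 call) follow one readiness pattern: retry up to 20 times until the nbd device shows up in /proc/partitions, then prove it actually serves I/O with a single 4 KiB O_DIRECT read. A minimal sketch of that pattern, reconstructed from the xtrace — the retry delay and temp-file path are assumptions, and this is not the verbatim autotest_common.sh source:

  waitfornbd() {
      local nbd_name=$1 i size tmp=/tmp/nbdtest    # tmp path assumed for the sketch
      # Phase 1: wait for the kernel to publish the device (autotest_common.sh@875-877)
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1                                # assumed back-off between retries
      done
      # Phase 2: confirm a direct read succeeds, as in the dd traces above (sh@888-893)
      for ((i = 1; i <= 20; i++)); do
          dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct && break
          sleep 0.1
      done
      size=$(stat -c %s "$tmp")
      rm -f "$tmp"
      [ "$size" != 0 ]                             # non-empty read back => device ready
  }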
00:16:51.508 mke2fs 1.47.0 (5-Feb-2023) 00:16:51.508 Discarding device blocks: 0/4096 done 00:16:51.508 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:51.508 00:16:51.508 Allocating group tables: 0/1 done 00:16:51.508 Writing inode tables: 0/1 done 00:16:51.508 Creating journal (1024 blocks): done 00:16:51.508 Writing superblocks and filesystem accounting information: 0/1 done 00:16:51.508 00:16:51.508 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:51.508 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:51.508 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:51.508 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:51.508 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:51.508 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:51.508 04:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:51.767 04:07:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:51.767 04:07:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:51.767 04:07:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:51.767 04:07:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:51.767 04:07:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:51.767 04:07:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:51.767 04:07:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:51.767 04:07:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:51.767 04:07:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72327 00:16:51.767 04:07:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72327 ']' 00:16:51.767 04:07:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72327 00:16:51.767 04:07:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:16:51.767 04:07:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.767 04:07:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72327 00:16:51.767 04:07:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:51.767 04:07:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:51.767 killing process with pid 72327 00:16:51.767 04:07:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72327' 00:16:51.767 04:07:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72327 00:16:51.767 04:07:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72327 00:16:52.333 04:07:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:52.333 00:16:52.333 real 0m9.374s 00:16:52.333 user 0m13.458s 00:16:52.333 sys 0m3.044s 00:16:52.333 04:07:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:52.333 04:07:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:52.333 ************************************ 
00:16:52.333 END TEST bdev_nbd 00:16:52.333 ************************************ 00:16:52.333 04:07:39 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:16:52.333 04:07:39 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:16:52.333 04:07:39 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:16:52.333 04:07:39 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:16:52.333 04:07:39 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:52.333 04:07:39 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:52.333 04:07:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:52.333 ************************************ 00:16:52.333 START TEST bdev_fio 00:16:52.333 ************************************ 00:16:52.333 04:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:16:52.333 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:52.333 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:52.333 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:52.333 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:52.333 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:52.333 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:52.333 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:52.333 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:52.333 04:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:52.333 04:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:16:52.333 04:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:16:52.333 04:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:52.333 04:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:52.333 04:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:52.333 04:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:16:52.333 04:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:52.333 04:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:52.333 04:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:52.333 04:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:16:52.333 04:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:16:52.333 04:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:16:52.333 04:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:52.591 ************************************ 00:16:52.591 START TEST bdev_fio_rw_verify 00:16:52.591 ************************************ 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:52.591 04:07:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:52.591 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:52.591 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:52.591 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:52.591 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:52.591 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:52.591 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:52.591 fio-3.35 00:16:52.591 Starting 6 threads 00:17:04.790 00:17:04.790 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72719: Fri Dec 6 04:07:50 2024 00:17:04.790 read: IOPS=39.0k, BW=152MiB/s (160MB/s)(1525MiB/10002msec) 00:17:04.790 slat (usec): min=2, max=3118, avg= 4.83, stdev= 7.17 00:17:04.790 clat (usec): min=66, max=9449, avg=424.93, 
stdev=348.69 00:17:04.790 lat (usec): min=70, max=9464, avg=429.76, stdev=349.18 00:17:04.790 clat percentiles (usec): 00:17:04.790 | 50.000th=[ 351], 99.000th=[ 2008], 99.900th=[ 3425], 99.990th=[ 5211], 00:17:04.790 | 99.999th=[ 9372] 00:17:04.790 write: IOPS=39.3k, BW=154MiB/s (161MB/s)(1536MiB/10002msec); 0 zone resets 00:17:04.790 slat (usec): min=4, max=3886, avg=24.59, stdev=54.41 00:17:04.790 clat (usec): min=57, max=6148, avg=574.09, stdev=421.34 00:17:04.790 lat (usec): min=71, max=6176, avg=598.68, stdev=429.78 00:17:04.790 clat percentiles (usec): 00:17:04.790 | 50.000th=[ 482], 99.000th=[ 2474], 99.900th=[ 3851], 99.990th=[ 5407], 00:17:04.790 | 99.999th=[ 6063] 00:17:04.790 bw ( KiB/s): min=103241, max=201254, per=100.00%, avg=161669.84, stdev=5249.10, samples=114 00:17:04.790 iops : min=25809, max=50313, avg=40416.84, stdev=1312.33, samples=114 00:17:04.790 lat (usec) : 100=0.16%, 250=19.36%, 500=45.28%, 750=22.11%, 1000=6.69% 00:17:04.790 lat (msec) : 2=4.93%, 4=1.42%, 10=0.06% 00:17:04.790 cpu : usr=46.03%, sys=33.71%, ctx=9770, majf=0, minf=31554 00:17:04.790 IO depths : 1=11.5%, 2=23.8%, 4=51.1%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:04.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.790 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.790 issued rwts: total=390475,393198,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:04.790 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:04.790 00:17:04.790 Run status group 0 (all jobs): 00:17:04.790 READ: bw=152MiB/s (160MB/s), 152MiB/s-152MiB/s (160MB/s-160MB/s), io=1525MiB (1599MB), run=10002-10002msec 00:17:04.790 WRITE: bw=154MiB/s (161MB/s), 154MiB/s-154MiB/s (161MB/s-161MB/s), io=1536MiB (1611MB), run=10002-10002msec 00:17:04.790 ----------------------------------------------------- 00:17:04.790 Suppressions used: 00:17:04.790 count bytes template 00:17:04.790 6 48 /usr/src/fio/parse.c 00:17:04.790 2462 236352 /usr/src/fio/iolog.c 00:17:04.790 1 8 libtcmalloc_minimal.so 00:17:04.790 1 904 libcrypto.so 00:17:04.790 ----------------------------------------------------- 00:17:04.790 00:17:04.790 00:17:04.790 real 0m11.859s 00:17:04.790 user 0m29.072s 00:17:04.790 sys 0m20.506s 00:17:04.790 04:07:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:04.790 04:07:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:04.790 ************************************ 00:17:04.790 END TEST bdev_fio_rw_verify 00:17:04.790 ************************************ 00:17:04.790 04:07:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:04.790 04:07:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:04.790 04:07:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:04.790 04:07:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:04.790 04:07:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:17:04.790 04:07:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:17:04.790 04:07:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:04.790 04:07:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:17:04.790 04:07:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:04.790 04:07:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:17:04.790 04:07:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:04.790 04:07:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:04.790 04:07:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:04.790 04:07:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:17:04.790 04:07:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:17:04.790 04:07:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:17:04.790 04:07:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:04.791 04:07:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "1cf08e97-9499-4c8e-8b83-16776b5c5fa1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1cf08e97-9499-4c8e-8b83-16776b5c5fa1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "a318db4e-b674-43ec-8529-8000cbf9446c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a318db4e-b674-43ec-8529-8000cbf9446c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "1fb900db-32e4-47e2-b129-76a62ceac15f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1fb900db-32e4-47e2-b129-76a62ceac15f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "d26ae0a1-f88a-4246-ad47-f9e72cec15e1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d26ae0a1-f88a-4246-ad47-f9e72cec15e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "1f61c749-ec94-479f-8fde-12280016432e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "1f61c749-ec94-479f-8fde-12280016432e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b92a2470-272c-41ef-bb19-d2e487d23cc7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b92a2470-272c-41ef-bb19-d2e487d23cc7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:04.791 04:07:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:04.791 04:07:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:04.791 /home/vagrant/spdk_repo/spdk 00:17:04.791 04:07:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:04.791 04:07:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:04.791 04:07:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
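For the bdev_fio test that finishes above, the job file was assembled one echo at a time: fio_config_gen wrote the verify boilerplate, serialize_overlap=1 was appended once fio >= 3 was detected, and blockdev.sh@340-342 added a [job_...]/filename= pair per bdev. Only fragments of the global section are visible in this trace, so the sketch below is an approximation (the verify= choice in particular is an assumption); the invocation mirrors the LD_PRELOAD command recorded above:

  # sketch of the assembled test/bdev/bdev.fio (global section approximated)
  [global]
  ioengine=spdk_bdev      ; also passed on the command line below
  rw=randwrite            ; matches the per-job banners printed by fio above
  verify=crc32c           ; assumption: fio_config_gen sets some verify method
  serialize_overlap=1     ; appended after the fio-3.35 version check above

  [job_nvme0n1]
  filename=nvme0n1        ; bdev names, not /dev nodes: the spdk_bdev engine resolves them
  ; ...one such section per bdev, through [job_nvme3n1]

  # run as traced, with ASan loaded ahead of the spdk_bdev fio plugin:
  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
      /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
      --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 \
      --aux-path=/home/vagrant/spdk_repo/spdk/../output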
00:17:04.791 00:17:04.791 real 0m12.016s 00:17:04.791 user 0m29.148s 00:17:04.791 sys 0m20.573s 00:17:04.791 04:07:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:04.791 ************************************ 00:17:04.791 END TEST bdev_fio 00:17:04.791 ************************************ 00:17:04.791 04:07:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:04.791 04:07:51 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:04.791 04:07:51 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:04.791 04:07:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:04.791 04:07:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:04.791 04:07:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:04.791 ************************************ 00:17:04.791 START TEST bdev_verify 00:17:04.791 ************************************ 00:17:04.791 04:07:51 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:04.791 [2024-12-06 04:07:51.982755] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:17:04.791 [2024-12-06 04:07:51.982868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72894 ] 00:17:04.791 [2024-12-06 04:07:52.145329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:04.791 [2024-12-06 04:07:52.247558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.791 [2024-12-06 04:07:52.247652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.359 Running I/O for 5 seconds... 
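The bdev_verify stage above launches bdevperf directly against the same bdev.json; before its five-second run reports below, the flags from the traced command line, annotated per standard bdevperf usage:

  # bdev_verify: bdevperf exercising every bdev defined in bdev.json
  #   -q 128     queue depth per job
  #   -o 4096    I/O size in bytes (4 KiB)
  #   -w verify  write, read back, and compare the payload
  #   -t 5       run time in seconds
  #   -m 0x3     SPDK core mask - two reactors, matching 'Total cores available: 2' above
  # (-C and the trailing '' are supplied by the harness and not expanded here)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''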
00:17:07.707 24192.00 IOPS, 94.50 MiB/s [2024-12-06T04:07:56.175Z] 23904.00 IOPS, 93.38 MiB/s [2024-12-06T04:07:57.141Z] 22837.33 IOPS, 89.21 MiB/s [2024-12-06T04:07:58.085Z] 22616.00 IOPS, 88.34 MiB/s [2024-12-06T04:07:58.085Z] 22400.00 IOPS, 87.50 MiB/s 00:17:10.558 Latency(us) 00:17:10.558 [2024-12-06T04:07:58.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.558 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:10.558 Verification LBA range: start 0x0 length 0x80000 00:17:10.558 nvme0n1 : 5.02 1709.85 6.68 0.00 0.00 74717.65 8469.27 70577.23 00:17:10.558 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:10.558 Verification LBA range: start 0x80000 length 0x80000 00:17:10.558 nvme0n1 : 5.02 1706.98 6.67 0.00 0.00 74840.85 9729.58 68560.74 00:17:10.558 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:10.558 Verification LBA range: start 0x0 length 0x80000 00:17:10.558 nvme0n2 : 5.03 1704.82 6.66 0.00 0.00 74773.57 10889.06 76626.71 00:17:10.558 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:10.558 Verification LBA range: start 0x80000 length 0x80000 00:17:10.558 nvme0n2 : 5.06 1695.25 6.62 0.00 0.00 75198.67 10637.00 69367.34 00:17:10.558 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:10.558 Verification LBA range: start 0x0 length 0x80000 00:17:10.558 nvme0n3 : 5.06 1695.28 6.62 0.00 0.00 75035.39 14619.57 69367.34 00:17:10.558 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:10.558 Verification LBA range: start 0x80000 length 0x80000 00:17:10.558 nvme0n3 : 5.04 1700.79 6.64 0.00 0.00 74788.63 14115.45 62511.26 00:17:10.558 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:10.558 Verification LBA range: start 0x0 length 0x20000 00:17:10.558 nvme1n1 : 5.06 1693.68 6.62 0.00 0.00 74952.52 11241.94 79449.80 00:17:10.558 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:10.558 Verification LBA range: start 0x20000 length 0x20000 00:17:10.558 nvme1n1 : 5.09 1711.46 6.69 0.00 0.00 74178.03 9578.34 69770.63 00:17:10.558 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:10.558 Verification LBA range: start 0x0 length 0xbd0bd 00:17:10.558 nvme2n1 : 5.08 2414.36 9.43 0.00 0.00 52350.57 6553.60 66544.25 00:17:10.558 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:10.558 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:17:10.558 nvme2n1 : 5.08 2551.29 9.97 0.00 0.00 49540.69 4083.40 78239.90 00:17:10.558 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:10.558 Verification LBA range: start 0x0 length 0xa0000 00:17:10.558 nvme3n1 : 5.08 1787.55 6.98 0.00 0.00 70698.22 4335.46 75416.81 00:17:10.559 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:10.559 Verification LBA range: start 0xa0000 length 0xa0000 00:17:10.559 nvme3n1 : 5.08 1763.21 6.89 0.00 0.00 71675.39 6856.07 75013.51 00:17:10.559 [2024-12-06T04:07:58.086Z] =================================================================================================================== 00:17:10.559 [2024-12-06T04:07:58.086Z] Total : 22134.55 86.46 0.00 0.00 68843.55 4083.40 79449.80 00:17:11.128 00:17:11.128 real 0m6.684s 00:17:11.128 user 0m10.541s 00:17:11.128 sys 0m1.656s 00:17:11.128 04:07:58 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:17:11.128 04:07:58 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:11.128 ************************************ 00:17:11.128 END TEST bdev_verify 00:17:11.128 ************************************ 00:17:11.388 04:07:58 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:11.388 04:07:58 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:11.388 04:07:58 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.388 04:07:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:11.388 ************************************ 00:17:11.388 START TEST bdev_verify_big_io 00:17:11.388 ************************************ 00:17:11.388 04:07:58 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:11.388 [2024-12-06 04:07:58.761984] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:17:11.388 [2024-12-06 04:07:58.762155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72989 ] 00:17:11.648 [2024-12-06 04:07:58.937160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:11.648 [2024-12-06 04:07:59.079661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.648 [2024-12-06 04:07:59.079782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.220 Running I/O for 5 seconds... 
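The big-io pass repeats the verify workload with -o 65536, so the throughput lines that follow scale by a 64 KiB block: MiB/s = IOPS x I/O size / 2^20. Checking the first sample reported below against that identity:

  # 1190 IOPS at 64 KiB each: 1190 * 65536 / 1048576 = 74.375 MiB/s (logged as 74.38)
  $ echo '1190.00 * 65536 / 1048576' | bc -l
  74.37500000000000000000
  # the earlier 4 KiB verify pass works out the same way: 24192 / 256 = 94.50 MiB/s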
00:17:18.141 1190.00 IOPS, 74.38 MiB/s [2024-12-06T04:08:05.668Z] 3090.00 IOPS, 193.12 MiB/s [2024-12-06T04:08:06.233Z] 3068.00 IOPS, 191.75 MiB/s 00:17:18.706 Latency(us) 00:17:18.706 [2024-12-06T04:08:06.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.706 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:18.706 Verification LBA range: start 0x0 length 0x8000 00:17:18.706 nvme0n1 : 5.91 94.69 5.92 0.00 0.00 1282527.28 196003.05 2413337.99 00:17:18.706 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:18.706 Verification LBA range: start 0x8000 length 0x8000 00:17:18.706 nvme0n1 : 5.81 107.45 6.72 0.00 0.00 1131645.79 29440.79 1161499.57 00:17:18.706 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:18.706 Verification LBA range: start 0x0 length 0x8000 00:17:18.706 nvme0n2 : 5.93 95.68 5.98 0.00 0.00 1201153.69 120182.94 2503676.85 00:17:18.706 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:18.706 Verification LBA range: start 0x8000 length 0x8000 00:17:18.706 nvme0n2 : 5.81 107.40 6.71 0.00 0.00 1103027.27 51017.26 1574477.19 00:17:18.706 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:18.706 Verification LBA range: start 0x0 length 0x8000 00:17:18.706 nvme0n3 : 5.92 135.16 8.45 0.00 0.00 864802.36 5494.94 903388.55 00:17:18.706 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:18.706 Verification LBA range: start 0x8000 length 0x8000 00:17:18.706 nvme0n3 : 5.91 119.10 7.44 0.00 0.00 980899.77 72593.72 1271196.75 00:17:18.706 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:18.706 Verification LBA range: start 0x0 length 0x2000 00:17:18.706 nvme1n1 : 5.93 151.03 9.44 0.00 0.00 757662.44 10435.35 774333.05 00:17:18.706 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:18.706 Verification LBA range: start 0x2000 length 0x2000 00:17:18.706 nvme1n1 : 5.92 147.98 9.25 0.00 0.00 761860.26 59688.17 858219.13 00:17:18.706 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:18.706 Verification LBA range: start 0x0 length 0xbd0b 00:17:18.706 nvme2n1 : 5.94 185.91 11.62 0.00 0.00 596257.98 4587.52 754974.72 00:17:18.706 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:18.706 Verification LBA range: start 0xbd0b length 0xbd0b 00:17:18.706 nvme2n1 : 5.92 162.29 10.14 0.00 0.00 672080.36 13510.50 1742249.35 00:17:18.706 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:18.706 Verification LBA range: start 0x0 length 0xa000 00:17:18.706 nvme3n1 : 6.31 141.98 8.87 0.00 0.00 740007.88 573.44 942105.21 00:17:18.706 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:18.706 Verification LBA range: start 0xa000 length 0xa000 00:17:18.706 nvme3n1 : 6.31 129.35 8.08 0.00 0.00 798711.71 422.20 1471232.79 00:17:18.706 [2024-12-06T04:08:06.233Z] =================================================================================================================== 00:17:18.706 [2024-12-06T04:08:06.233Z] Total : 1578.02 98.63 0.00 0.00 863522.46 422.20 2503676.85 00:17:19.639 00:17:19.639 real 0m8.202s 00:17:19.639 user 0m14.975s 00:17:19.639 sys 0m0.500s 00:17:19.639 04:08:06 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.639 ************************************ 
00:17:19.639 END TEST bdev_verify_big_io 00:17:19.639 ************************************ 00:17:19.640 04:08:06 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.640 04:08:06 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:19.640 04:08:06 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:19.640 04:08:06 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.640 04:08:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:19.640 ************************************ 00:17:19.640 START TEST bdev_write_zeroes 00:17:19.640 ************************************ 00:17:19.640 04:08:06 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:19.640 [2024-12-06 04:08:07.008951] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:17:19.640 [2024-12-06 04:08:07.009066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73100 ] 00:17:19.897 [2024-12-06 04:08:07.167343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.897 [2024-12-06 04:08:07.269921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.156 Running I/O for 1 seconds... 00:17:21.531 77568.00 IOPS, 303.00 MiB/s 00:17:21.531 Latency(us) 00:17:21.531 [2024-12-06T04:08:09.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.531 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:21.531 nvme0n1 : 1.02 12418.12 48.51 0.00 0.00 10297.51 5091.64 27625.94 00:17:21.531 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:21.531 nvme0n2 : 1.02 12404.24 48.45 0.00 0.00 10301.63 5016.02 26416.05 00:17:21.531 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:21.531 nvme0n3 : 1.02 12389.79 48.40 0.00 0.00 10303.20 4587.52 25206.15 00:17:21.531 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:21.531 nvme1n1 : 1.02 12375.74 48.34 0.00 0.00 10307.43 4007.78 23996.26 00:17:21.531 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:21.531 nvme2n1 : 1.03 14597.01 57.02 0.00 0.00 8709.79 3806.13 24298.73 00:17:21.531 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:21.531 nvme3n1 : 1.03 12358.60 48.28 0.00 0.00 10261.16 3755.72 21677.29 00:17:21.531 [2024-12-06T04:08:09.058Z] =================================================================================================================== 00:17:21.531 [2024-12-06T04:08:09.058Z] Total : 76543.49 299.00 0.00 0.00 9990.76 3755.72 27625.94 00:17:22.097 00:17:22.097 real 0m2.483s 00:17:22.097 user 0m1.820s 00:17:22.097 sys 0m0.465s 00:17:22.097 04:08:09 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.097 04:08:09 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:22.097 
************************************ 00:17:22.097 END TEST bdev_write_zeroes 00:17:22.097 ************************************ 00:17:22.097 04:08:09 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:22.097 04:08:09 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:22.097 04:08:09 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.097 04:08:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:22.097 ************************************ 00:17:22.097 START TEST bdev_json_nonenclosed 00:17:22.097 ************************************ 00:17:22.097 04:08:09 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:22.097 [2024-12-06 04:08:09.558282] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:17:22.097 [2024-12-06 04:08:09.558416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73154 ] 00:17:22.355 [2024-12-06 04:08:09.720146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.355 [2024-12-06 04:08:09.815784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.355 [2024-12-06 04:08:09.815850] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:22.355 [2024-12-06 04:08:09.815866] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:22.355 [2024-12-06 04:08:09.815875] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:22.613 00:17:22.613 real 0m0.494s 00:17:22.613 user 0m0.300s 00:17:22.613 sys 0m0.090s 00:17:22.613 04:08:09 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.613 04:08:09 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:22.613 ************************************ 00:17:22.613 END TEST bdev_json_nonenclosed 00:17:22.613 ************************************ 00:17:22.613 04:08:10 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:22.613 04:08:10 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:22.613 04:08:10 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.613 04:08:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:22.613 ************************************ 00:17:22.613 START TEST bdev_json_nonarray 00:17:22.613 ************************************ 00:17:22.613 04:08:10 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:22.613 [2024-12-06 04:08:10.121332] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:17:22.613 [2024-12-06 04:08:10.121442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73174 ] 00:17:22.871 [2024-12-06 04:08:10.278690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.871 [2024-12-06 04:08:10.381556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.871 [2024-12-06 04:08:10.381634] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:17:22.871 [2024-12-06 04:08:10.381656] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:22.871 [2024-12-06 04:08:10.381670] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:23.129 00:17:23.129 real 0m0.505s 00:17:23.129 user 0m0.308s 00:17:23.129 sys 0m0.091s 00:17:23.129 ************************************ 00:17:23.129 END TEST bdev_json_nonarray 00:17:23.129 ************************************ 00:17:23.129 04:08:10 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.129 04:08:10 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:23.129 04:08:10 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:17:23.129 04:08:10 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:17:23.129 04:08:10 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:17:23.129 04:08:10 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:17:23.129 04:08:10 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:17:23.129 04:08:10 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:23.129 04:08:10 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:23.129 04:08:10 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:17:23.129 04:08:10 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:17:23.129 04:08:10 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:17:23.129 04:08:10 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:17:23.129 04:08:10 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:23.695 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:55.782 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:55.782 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:02.416 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:02.416 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:02.416 00:18:02.416 real 1m25.579s 00:18:02.416 user 1m21.233s 00:18:02.416 sys 1m47.961s 00:18:02.416 04:08:49 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.416 04:08:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:02.416 ************************************ 00:18:02.416 END TEST blockdev_xnvme 00:18:02.416 ************************************ 00:18:02.416 04:08:49 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:02.416 04:08:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:02.416 04:08:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.416 04:08:49 -- 
common/autotest_common.sh@10 -- # set +x 00:18:02.416 ************************************ 00:18:02.416 START TEST ublk 00:18:02.416 ************************************ 00:18:02.416 04:08:49 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:02.416 * Looking for test storage... 00:18:02.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:02.416 04:08:49 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:02.416 04:08:49 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:18:02.416 04:08:49 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:02.416 04:08:49 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:02.416 04:08:49 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.416 04:08:49 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.416 04:08:49 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.416 04:08:49 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.416 04:08:49 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.416 04:08:49 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.416 04:08:49 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.416 04:08:49 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:18:02.416 04:08:49 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.416 04:08:49 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.416 04:08:49 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.416 04:08:49 ublk -- scripts/common.sh@344 -- # case "$op" in 00:18:02.416 04:08:49 ublk -- scripts/common.sh@345 -- # : 1 00:18:02.416 04:08:49 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.416 04:08:49 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:02.416 04:08:49 ublk -- scripts/common.sh@365 -- # decimal 1 00:18:02.416 04:08:49 ublk -- scripts/common.sh@353 -- # local d=1 00:18:02.416 04:08:49 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.416 04:08:49 ublk -- scripts/common.sh@355 -- # echo 1 00:18:02.416 04:08:49 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.416 04:08:49 ublk -- scripts/common.sh@366 -- # decimal 2 00:18:02.416 04:08:49 ublk -- scripts/common.sh@353 -- # local d=2 00:18:02.416 04:08:49 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.416 04:08:49 ublk -- scripts/common.sh@355 -- # echo 2 00:18:02.416 04:08:49 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.416 04:08:49 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.416 04:08:49 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.416 04:08:49 ublk -- scripts/common.sh@368 -- # return 0 00:18:02.416 04:08:49 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.416 04:08:49 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:02.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.416 --rc genhtml_branch_coverage=1 00:18:02.416 --rc genhtml_function_coverage=1 00:18:02.416 --rc genhtml_legend=1 00:18:02.416 --rc geninfo_all_blocks=1 00:18:02.416 --rc geninfo_unexecuted_blocks=1 00:18:02.416 00:18:02.416 ' 00:18:02.416 04:08:49 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:02.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.416 --rc genhtml_branch_coverage=1 00:18:02.416 --rc genhtml_function_coverage=1 00:18:02.416 --rc genhtml_legend=1 00:18:02.416 --rc geninfo_all_blocks=1 00:18:02.416 --rc geninfo_unexecuted_blocks=1 00:18:02.416 00:18:02.416 ' 00:18:02.416 04:08:49 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:02.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.416 --rc genhtml_branch_coverage=1 00:18:02.416 --rc genhtml_function_coverage=1 00:18:02.416 --rc genhtml_legend=1 00:18:02.416 --rc geninfo_all_blocks=1 00:18:02.416 --rc geninfo_unexecuted_blocks=1 00:18:02.416 00:18:02.416 ' 00:18:02.416 04:08:49 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:02.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.416 --rc genhtml_branch_coverage=1 00:18:02.416 --rc genhtml_function_coverage=1 00:18:02.416 --rc genhtml_legend=1 00:18:02.416 --rc geninfo_all_blocks=1 00:18:02.416 --rc geninfo_unexecuted_blocks=1 00:18:02.416 00:18:02.416 ' 00:18:02.416 04:08:49 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:02.416 04:08:49 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:02.416 04:08:49 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:02.416 04:08:49 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:02.416 04:08:49 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:02.416 04:08:49 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:02.416 04:08:49 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:02.416 04:08:49 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:02.416 04:08:49 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:18:02.416 04:08:49 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:18:02.416 04:08:49 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:18:02.416 04:08:49 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:18:02.416 04:08:49 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:18:02.416 04:08:49 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:18:02.416 04:08:49 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:18:02.416 04:08:49 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:18:02.416 04:08:49 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:18:02.416 04:08:49 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:18:02.416 04:08:49 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:18:02.416 04:08:49 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:18:02.416 04:08:49 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:02.416 04:08:49 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.416 04:08:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:02.416 ************************************ 00:18:02.416 START TEST test_save_ublk_config 00:18:02.416 ************************************ 00:18:02.416 04:08:49 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:18:02.416 04:08:49 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:18:02.416 04:08:49 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73509 00:18:02.416 04:08:49 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:18:02.416 04:08:49 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73509 00:18:02.416 04:08:49 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:18:02.416 04:08:49 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73509 ']' 00:18:02.417 04:08:49 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.417 04:08:49 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.417 04:08:49 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.417 04:08:49 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.417 04:08:49 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:02.417 [2024-12-06 04:08:49.298581] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:18:02.417 [2024-12-06 04:08:49.298703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73509 ] 00:18:02.417 [2024-12-06 04:08:49.455380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.417 [2024-12-06 04:08:49.556486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.677 04:08:50 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.677 04:08:50 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:18:02.677 04:08:50 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:18:02.677 04:08:50 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:18:02.677 04:08:50 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.677 04:08:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:02.677 [2024-12-06 04:08:50.154737] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:02.677 [2024-12-06 04:08:50.155413] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:02.677 malloc0 00:18:02.939 [2024-12-06 04:08:50.209845] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:02.939 [2024-12-06 04:08:50.209918] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:02.939 [2024-12-06 04:08:50.209926] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:02.939 [2024-12-06 04:08:50.209932] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:02.939 [2024-12-06 04:08:50.218798] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:02.939 [2024-12-06 04:08:50.218823] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:02.939 [2024-12-06 04:08:50.225742] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:02.939 [2024-12-06 04:08:50.225838] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:02.939 [2024-12-06 04:08:50.242736] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:02.939 0 00:18:02.939 04:08:50 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.939 04:08:50 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:18:02.939 04:08:50 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.939 04:08:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:03.202 04:08:50 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.202 04:08:50 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:18:03.202 "subsystems": [ 00:18:03.202 { 00:18:03.202 "subsystem": "fsdev", 00:18:03.202 "config": [ 00:18:03.202 { 00:18:03.202 "method": "fsdev_set_opts", 00:18:03.202 "params": { 00:18:03.202 "fsdev_io_pool_size": 65535, 00:18:03.202 "fsdev_io_cache_size": 256 00:18:03.202 } 00:18:03.202 } 00:18:03.202 ] 00:18:03.202 }, 00:18:03.202 { 00:18:03.202 "subsystem": "keyring", 00:18:03.202 "config": [] 00:18:03.202 }, 00:18:03.202 { 00:18:03.202 "subsystem": "iobuf", 00:18:03.202 "config": [ 00:18:03.202 { 
00:18:03.202 "method": "iobuf_set_options", 00:18:03.202 "params": { 00:18:03.202 "small_pool_count": 8192, 00:18:03.202 "large_pool_count": 1024, 00:18:03.202 "small_bufsize": 8192, 00:18:03.202 "large_bufsize": 135168, 00:18:03.202 "enable_numa": false 00:18:03.202 } 00:18:03.202 } 00:18:03.202 ] 00:18:03.202 }, 00:18:03.202 { 00:18:03.202 "subsystem": "sock", 00:18:03.202 "config": [ 00:18:03.202 { 00:18:03.202 "method": "sock_set_default_impl", 00:18:03.202 "params": { 00:18:03.202 "impl_name": "posix" 00:18:03.202 } 00:18:03.202 }, 00:18:03.202 { 00:18:03.202 "method": "sock_impl_set_options", 00:18:03.202 "params": { 00:18:03.202 "impl_name": "ssl", 00:18:03.202 "recv_buf_size": 4096, 00:18:03.202 "send_buf_size": 4096, 00:18:03.202 "enable_recv_pipe": true, 00:18:03.202 "enable_quickack": false, 00:18:03.202 "enable_placement_id": 0, 00:18:03.202 "enable_zerocopy_send_server": true, 00:18:03.202 "enable_zerocopy_send_client": false, 00:18:03.202 "zerocopy_threshold": 0, 00:18:03.202 "tls_version": 0, 00:18:03.202 "enable_ktls": false 00:18:03.202 } 00:18:03.202 }, 00:18:03.202 { 00:18:03.202 "method": "sock_impl_set_options", 00:18:03.202 "params": { 00:18:03.202 "impl_name": "posix", 00:18:03.202 "recv_buf_size": 2097152, 00:18:03.202 "send_buf_size": 2097152, 00:18:03.202 "enable_recv_pipe": true, 00:18:03.202 "enable_quickack": false, 00:18:03.202 "enable_placement_id": 0, 00:18:03.202 "enable_zerocopy_send_server": true, 00:18:03.202 "enable_zerocopy_send_client": false, 00:18:03.202 "zerocopy_threshold": 0, 00:18:03.202 "tls_version": 0, 00:18:03.202 "enable_ktls": false 00:18:03.202 } 00:18:03.202 } 00:18:03.202 ] 00:18:03.202 }, 00:18:03.202 { 00:18:03.202 "subsystem": "vmd", 00:18:03.202 "config": [] 00:18:03.202 }, 00:18:03.202 { 00:18:03.202 "subsystem": "accel", 00:18:03.202 "config": [ 00:18:03.202 { 00:18:03.202 "method": "accel_set_options", 00:18:03.202 "params": { 00:18:03.202 "small_cache_size": 128, 00:18:03.202 "large_cache_size": 16, 00:18:03.202 "task_count": 2048, 00:18:03.202 "sequence_count": 2048, 00:18:03.202 "buf_count": 2048 00:18:03.202 } 00:18:03.202 } 00:18:03.202 ] 00:18:03.202 }, 00:18:03.202 { 00:18:03.202 "subsystem": "bdev", 00:18:03.202 "config": [ 00:18:03.202 { 00:18:03.202 "method": "bdev_set_options", 00:18:03.202 "params": { 00:18:03.202 "bdev_io_pool_size": 65535, 00:18:03.202 "bdev_io_cache_size": 256, 00:18:03.202 "bdev_auto_examine": true, 00:18:03.202 "iobuf_small_cache_size": 128, 00:18:03.202 "iobuf_large_cache_size": 16 00:18:03.202 } 00:18:03.202 }, 00:18:03.202 { 00:18:03.202 "method": "bdev_raid_set_options", 00:18:03.202 "params": { 00:18:03.202 "process_window_size_kb": 1024, 00:18:03.202 "process_max_bandwidth_mb_sec": 0 00:18:03.202 } 00:18:03.202 }, 00:18:03.202 { 00:18:03.202 "method": "bdev_iscsi_set_options", 00:18:03.202 "params": { 00:18:03.202 "timeout_sec": 30 00:18:03.202 } 00:18:03.202 }, 00:18:03.202 { 00:18:03.202 "method": "bdev_nvme_set_options", 00:18:03.202 "params": { 00:18:03.202 "action_on_timeout": "none", 00:18:03.202 "timeout_us": 0, 00:18:03.202 "timeout_admin_us": 0, 00:18:03.202 "keep_alive_timeout_ms": 10000, 00:18:03.202 "arbitration_burst": 0, 00:18:03.202 "low_priority_weight": 0, 00:18:03.202 "medium_priority_weight": 0, 00:18:03.202 "high_priority_weight": 0, 00:18:03.202 "nvme_adminq_poll_period_us": 10000, 00:18:03.202 "nvme_ioq_poll_period_us": 0, 00:18:03.202 "io_queue_requests": 0, 00:18:03.202 "delay_cmd_submit": true, 00:18:03.203 "transport_retry_count": 4, 00:18:03.203 
"bdev_retry_count": 3, 00:18:03.203 "transport_ack_timeout": 0, 00:18:03.203 "ctrlr_loss_timeout_sec": 0, 00:18:03.203 "reconnect_delay_sec": 0, 00:18:03.203 "fast_io_fail_timeout_sec": 0, 00:18:03.203 "disable_auto_failback": false, 00:18:03.203 "generate_uuids": false, 00:18:03.203 "transport_tos": 0, 00:18:03.203 "nvme_error_stat": false, 00:18:03.203 "rdma_srq_size": 0, 00:18:03.203 "io_path_stat": false, 00:18:03.203 "allow_accel_sequence": false, 00:18:03.203 "rdma_max_cq_size": 0, 00:18:03.203 "rdma_cm_event_timeout_ms": 0, 00:18:03.203 "dhchap_digests": [ 00:18:03.203 "sha256", 00:18:03.203 "sha384", 00:18:03.203 "sha512" 00:18:03.203 ], 00:18:03.203 "dhchap_dhgroups": [ 00:18:03.203 "null", 00:18:03.203 "ffdhe2048", 00:18:03.203 "ffdhe3072", 00:18:03.203 "ffdhe4096", 00:18:03.203 "ffdhe6144", 00:18:03.203 "ffdhe8192" 00:18:03.203 ] 00:18:03.203 } 00:18:03.203 }, 00:18:03.203 { 00:18:03.203 "method": "bdev_nvme_set_hotplug", 00:18:03.203 "params": { 00:18:03.203 "period_us": 100000, 00:18:03.203 "enable": false 00:18:03.203 } 00:18:03.203 }, 00:18:03.203 { 00:18:03.203 "method": "bdev_malloc_create", 00:18:03.203 "params": { 00:18:03.203 "name": "malloc0", 00:18:03.203 "num_blocks": 8192, 00:18:03.203 "block_size": 4096, 00:18:03.203 "physical_block_size": 4096, 00:18:03.203 "uuid": "23283275-7f64-4bbd-a083-c76597bf6b6e", 00:18:03.203 "optimal_io_boundary": 0, 00:18:03.203 "md_size": 0, 00:18:03.203 "dif_type": 0, 00:18:03.203 "dif_is_head_of_md": false, 00:18:03.203 "dif_pi_format": 0 00:18:03.203 } 00:18:03.203 }, 00:18:03.203 { 00:18:03.203 "method": "bdev_wait_for_examine" 00:18:03.203 } 00:18:03.203 ] 00:18:03.203 }, 00:18:03.203 { 00:18:03.203 "subsystem": "scsi", 00:18:03.203 "config": null 00:18:03.203 }, 00:18:03.203 { 00:18:03.203 "subsystem": "scheduler", 00:18:03.203 "config": [ 00:18:03.203 { 00:18:03.203 "method": "framework_set_scheduler", 00:18:03.203 "params": { 00:18:03.203 "name": "static" 00:18:03.203 } 00:18:03.203 } 00:18:03.203 ] 00:18:03.203 }, 00:18:03.203 { 00:18:03.203 "subsystem": "vhost_scsi", 00:18:03.203 "config": [] 00:18:03.203 }, 00:18:03.203 { 00:18:03.203 "subsystem": "vhost_blk", 00:18:03.203 "config": [] 00:18:03.203 }, 00:18:03.203 { 00:18:03.203 "subsystem": "ublk", 00:18:03.203 "config": [ 00:18:03.203 { 00:18:03.203 "method": "ublk_create_target", 00:18:03.203 "params": { 00:18:03.203 "cpumask": "1" 00:18:03.203 } 00:18:03.203 }, 00:18:03.203 { 00:18:03.203 "method": "ublk_start_disk", 00:18:03.203 "params": { 00:18:03.203 "bdev_name": "malloc0", 00:18:03.203 "ublk_id": 0, 00:18:03.203 "num_queues": 1, 00:18:03.203 "queue_depth": 128 00:18:03.203 } 00:18:03.203 } 00:18:03.203 ] 00:18:03.203 }, 00:18:03.203 { 00:18:03.203 "subsystem": "nbd", 00:18:03.203 "config": [] 00:18:03.203 }, 00:18:03.203 { 00:18:03.203 "subsystem": "nvmf", 00:18:03.203 "config": [ 00:18:03.203 { 00:18:03.203 "method": "nvmf_set_config", 00:18:03.203 "params": { 00:18:03.203 "discovery_filter": "match_any", 00:18:03.203 "admin_cmd_passthru": { 00:18:03.203 "identify_ctrlr": false 00:18:03.203 }, 00:18:03.203 "dhchap_digests": [ 00:18:03.203 "sha256", 00:18:03.203 "sha384", 00:18:03.203 "sha512" 00:18:03.203 ], 00:18:03.203 "dhchap_dhgroups": [ 00:18:03.203 "null", 00:18:03.203 "ffdhe2048", 00:18:03.203 "ffdhe3072", 00:18:03.203 "ffdhe4096", 00:18:03.203 "ffdhe6144", 00:18:03.203 "ffdhe8192" 00:18:03.203 ] 00:18:03.203 } 00:18:03.203 }, 00:18:03.203 { 00:18:03.203 "method": "nvmf_set_max_subsystems", 00:18:03.203 "params": { 00:18:03.203 "max_subsystems": 1024 
00:18:03.203 } 00:18:03.203 }, 00:18:03.203 { 00:18:03.203 "method": "nvmf_set_crdt", 00:18:03.203 "params": { 00:18:03.203 "crdt1": 0, 00:18:03.203 "crdt2": 0, 00:18:03.203 "crdt3": 0 00:18:03.203 } 00:18:03.203 } 00:18:03.203 ] 00:18:03.203 }, 00:18:03.203 { 00:18:03.203 "subsystem": "iscsi", 00:18:03.203 "config": [ 00:18:03.203 { 00:18:03.203 "method": "iscsi_set_options", 00:18:03.203 "params": { 00:18:03.203 "node_base": "iqn.2016-06.io.spdk", 00:18:03.203 "max_sessions": 128, 00:18:03.203 "max_connections_per_session": 2, 00:18:03.203 "max_queue_depth": 64, 00:18:03.203 "default_time2wait": 2, 00:18:03.203 "default_time2retain": 20, 00:18:03.203 "first_burst_length": 8192, 00:18:03.203 "immediate_data": true, 00:18:03.203 "allow_duplicated_isid": false, 00:18:03.203 "error_recovery_level": 0, 00:18:03.203 "nop_timeout": 60, 00:18:03.203 "nop_in_interval": 30, 00:18:03.203 "disable_chap": false, 00:18:03.203 "require_chap": false, 00:18:03.203 "mutual_chap": false, 00:18:03.203 "chap_group": 0, 00:18:03.203 "max_large_datain_per_connection": 64, 00:18:03.203 "max_r2t_per_connection": 4, 00:18:03.203 "pdu_pool_size": 36864, 00:18:03.203 "immediate_data_pool_size": 16384, 00:18:03.203 "data_out_pool_size": 2048 00:18:03.203 } 00:18:03.203 } 00:18:03.203 ] 00:18:03.203 } 00:18:03.203 ] 00:18:03.203 }' 00:18:03.203 04:08:50 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73509 00:18:03.203 04:08:50 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73509 ']' 00:18:03.203 04:08:50 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73509 00:18:03.203 04:08:50 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:18:03.203 04:08:50 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:03.203 04:08:50 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73509 00:18:03.203 04:08:50 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:03.203 killing process with pid 73509 00:18:03.203 04:08:50 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:03.203 04:08:50 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73509' 00:18:03.203 04:08:50 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73509 00:18:03.203 04:08:50 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73509 00:18:04.151 [2024-12-06 04:08:51.660292] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:04.412 [2024-12-06 04:08:51.689816] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:04.412 [2024-12-06 04:08:51.689936] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:04.412 [2024-12-06 04:08:51.696752] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:04.412 [2024-12-06 04:08:51.696801] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:04.412 [2024-12-06 04:08:51.696811] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:04.412 [2024-12-06 04:08:51.696835] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:04.412 [2024-12-06 04:08:51.696959] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:05.801 04:08:52 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 
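The command above restarts spdk_tgt with the configuration that save_config just emitted, delivered over /dev/fd/63, the file descriptor bash allocates for process substitution; the echo that follows in the log is that JSON being written into the descriptor. A minimal sketch of the same save/restore round-trip, assuming a target already listening on the default /var/tmp/spdk.sock and the stock scripts/rpc.py client:

    # Capture the running target's configuration as a JSON document
    config=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config)
    # Feed the identical document back into a fresh target; bash exposes
    # the <(...) substitution as /dev/fd/63, matching the -c argument above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c <(echo "$config")
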
00:18:05.801 04:08:52 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:18:05.801 "subsystems": [ 00:18:05.801 { 00:18:05.801 "subsystem": "fsdev", 00:18:05.801 "config": [ 00:18:05.801 { 00:18:05.801 "method": "fsdev_set_opts", 00:18:05.801 "params": { 00:18:05.801 "fsdev_io_pool_size": 65535, 00:18:05.801 "fsdev_io_cache_size": 256 00:18:05.801 } 00:18:05.801 } 00:18:05.801 ] 00:18:05.801 }, 00:18:05.801 { 00:18:05.801 "subsystem": "keyring", 00:18:05.801 "config": [] 00:18:05.801 }, 00:18:05.801 { 00:18:05.801 "subsystem": "iobuf", 00:18:05.801 "config": [ 00:18:05.801 { 00:18:05.801 "method": "iobuf_set_options", 00:18:05.801 "params": { 00:18:05.801 "small_pool_count": 8192, 00:18:05.801 "large_pool_count": 1024, 00:18:05.801 "small_bufsize": 8192, 00:18:05.801 "large_bufsize": 135168, 00:18:05.801 "enable_numa": false 00:18:05.801 } 00:18:05.801 } 00:18:05.801 ] 00:18:05.801 }, 00:18:05.801 { 00:18:05.801 "subsystem": "sock", 00:18:05.801 "config": [ 00:18:05.801 { 00:18:05.801 "method": "sock_set_default_impl", 00:18:05.801 "params": { 00:18:05.801 "impl_name": "posix" 00:18:05.801 } 00:18:05.801 }, 00:18:05.801 { 00:18:05.801 "method": "sock_impl_set_options", 00:18:05.801 "params": { 00:18:05.801 "impl_name": "ssl", 00:18:05.801 "recv_buf_size": 4096, 00:18:05.801 "send_buf_size": 4096, 00:18:05.801 "enable_recv_pipe": true, 00:18:05.801 "enable_quickack": false, 00:18:05.801 "enable_placement_id": 0, 00:18:05.801 "enable_zerocopy_send_server": true, 00:18:05.801 "enable_zerocopy_send_client": false, 00:18:05.801 "zerocopy_threshold": 0, 00:18:05.801 "tls_version": 0, 00:18:05.801 "enable_ktls": false 00:18:05.801 } 00:18:05.801 }, 00:18:05.801 { 00:18:05.801 "method": "sock_impl_set_options", 00:18:05.801 "params": { 00:18:05.801 "impl_name": "posix", 00:18:05.801 "recv_buf_size": 2097152, 00:18:05.801 "send_buf_size": 2097152, 00:18:05.801 "enable_recv_pipe": true, 00:18:05.801 "enable_quickack": false, 00:18:05.801 "enable_placement_id": 0, 00:18:05.801 "enable_zerocopy_send_server": true, 00:18:05.801 "enable_zerocopy_send_client": false, 00:18:05.801 "zerocopy_threshold": 0, 00:18:05.801 "tls_version": 0, 00:18:05.801 "enable_ktls": false 00:18:05.801 } 00:18:05.801 } 00:18:05.801 ] 00:18:05.801 }, 00:18:05.801 { 00:18:05.801 "subsystem": "vmd", 00:18:05.801 "config": [] 00:18:05.801 }, 00:18:05.801 { 00:18:05.801 "subsystem": "accel", 00:18:05.801 "config": [ 00:18:05.801 { 00:18:05.801 "method": "accel_set_options", 00:18:05.801 "params": { 00:18:05.801 "small_cache_size": 128, 00:18:05.801 "large_cache_size": 16, 00:18:05.801 "task_count": 2048, 00:18:05.801 "sequence_count": 2048, 00:18:05.801 "buf_count": 2048 00:18:05.801 } 00:18:05.801 } 00:18:05.801 ] 00:18:05.801 }, 00:18:05.801 { 00:18:05.801 "subsystem": "bdev", 00:18:05.801 "config": [ 00:18:05.801 { 00:18:05.801 "method": "bdev_set_options", 00:18:05.801 "params": { 00:18:05.801 "bdev_io_pool_size": 65535, 00:18:05.801 "bdev_io_cache_size": 256, 00:18:05.801 "bdev_auto_examine": true, 00:18:05.801 "iobuf_small_cache_size": 128, 00:18:05.801 "iobuf_large_cache_size": 16 00:18:05.801 } 00:18:05.801 }, 00:18:05.801 { 00:18:05.801 "method": "bdev_raid_set_options", 00:18:05.801 "params": { 00:18:05.801 "process_window_size_kb": 1024, 00:18:05.801 "process_max_bandwidth_mb_sec": 0 00:18:05.801 } 00:18:05.801 }, 00:18:05.801 { 00:18:05.801 "method": "bdev_iscsi_set_options", 00:18:05.801 "params": { 00:18:05.801 "timeout_sec": 30 00:18:05.801 } 00:18:05.801 }, 00:18:05.801 { 00:18:05.801 "method": 
"bdev_nvme_set_options", 00:18:05.801 "params": { 00:18:05.801 "action_on_timeout": "none", 00:18:05.801 "timeout_us": 0, 00:18:05.801 "timeout_admin_us": 0, 00:18:05.801 "keep_alive_timeout_ms": 10000, 00:18:05.801 "arbitration_burst": 0, 00:18:05.801 "low_priority_weight": 0, 00:18:05.801 "medium_priority_weight": 0, 00:18:05.801 "high_priority_weight": 0, 00:18:05.801 "nvme_adminq_poll_period_us": 10000, 00:18:05.801 "nvme_ioq_poll_period_us": 0, 00:18:05.801 "io_queue_requests": 0, 00:18:05.801 "delay_cmd_submit": true, 00:18:05.801 "transport_retry_count": 4, 00:18:05.801 "bdev_retry_count": 3, 00:18:05.801 "transport_ack_timeout": 0, 00:18:05.801 "ctrlr_loss_timeout_sec": 0, 00:18:05.801 "reconnect_delay_sec": 0, 00:18:05.801 "fast_io_fail_timeout_sec": 0, 00:18:05.801 "disable_auto_failback": false, 00:18:05.801 "generate_uuids": false, 00:18:05.801 "transport_tos": 0, 00:18:05.801 "nvme_error_stat": false, 00:18:05.801 "rdma_srq_size": 0, 00:18:05.801 "io_path_stat": false, 00:18:05.801 "allow_accel_sequence": false, 00:18:05.801 "rdma_max_cq_size": 0, 00:18:05.801 "rdma_cm_event_timeout_ms": 0, 00:18:05.801 "dhchap_digests": [ 00:18:05.801 "sha256", 00:18:05.801 "sha384", 00:18:05.801 "sha512" 00:18:05.801 ], 00:18:05.801 "dhchap_dhgroups": [ 00:18:05.801 "null", 00:18:05.801 "ffdhe2048", 00:18:05.801 "ffdhe3072", 00:18:05.801 "ffdhe4096", 00:18:05.801 "ffdhe6144", 00:18:05.801 "ffdhe8192" 00:18:05.801 ] 00:18:05.801 } 00:18:05.801 }, 00:18:05.801 { 00:18:05.801 "method": "bdev_nvme_set_hotplug", 00:18:05.801 "params": { 00:18:05.801 "period_us": 100000, 00:18:05.801 "enable": false 00:18:05.801 } 00:18:05.801 }, 00:18:05.801 { 00:18:05.801 "method": "bdev_malloc_create", 00:18:05.801 "params": { 00:18:05.801 "name": "malloc0", 00:18:05.801 "num_blocks": 8192, 00:18:05.801 "block_size": 4096, 00:18:05.801 "physical_block_size": 4096, 00:18:05.801 "uuid": "23283275-7f64-4bbd-a083-c76597bf6b6e", 00:18:05.801 "optimal_io_boundary": 0, 00:18:05.801 "md_size": 0, 00:18:05.801 "dif_type": 0, 00:18:05.801 "dif_is_head_of_md": false, 00:18:05.801 "dif_pi_format": 0 00:18:05.801 } 00:18:05.801 }, 00:18:05.801 { 00:18:05.801 "method": "bdev_wait_for_examine" 00:18:05.801 } 00:18:05.801 ] 00:18:05.801 }, 00:18:05.801 { 00:18:05.801 "subsystem": "scsi", 00:18:05.801 "config": null 00:18:05.801 }, 00:18:05.801 { 00:18:05.801 "subsystem": "scheduler", 00:18:05.801 "config": [ 00:18:05.801 { 00:18:05.801 "method": "framework_set_scheduler", 00:18:05.801 "params": { 00:18:05.801 "name": "static" 00:18:05.802 } 00:18:05.802 } 00:18:05.802 ] 00:18:05.802 }, 00:18:05.802 { 00:18:05.802 "subsystem": "vhost_scsi", 00:18:05.802 "config": [] 00:18:05.802 }, 00:18:05.802 { 00:18:05.802 "subsystem": "vhost_blk", 00:18:05.802 "config": [] 00:18:05.802 }, 00:18:05.802 { 00:18:05.802 "subsystem": "ublk", 00:18:05.802 "config": [ 00:18:05.802 { 00:18:05.802 "method": "ublk_create_target", 00:18:05.802 "params": { 00:18:05.802 "cpumask": "1" 00:18:05.802 } 00:18:05.802 }, 00:18:05.802 { 00:18:05.802 "method": "ublk_start_disk", 00:18:05.802 "params": { 00:18:05.802 "bdev_name": "malloc0", 00:18:05.802 "ublk_id": 0, 00:18:05.802 "num_queues": 1, 00:18:05.802 "queue_depth": 128 00:18:05.802 } 00:18:05.802 } 00:18:05.802 ] 00:18:05.802 }, 00:18:05.802 { 00:18:05.802 "subsystem": "nbd", 00:18:05.802 "config": [] 00:18:05.802 }, 00:18:05.802 { 00:18:05.802 "subsystem": "nvmf", 00:18:05.802 "config": [ 00:18:05.802 { 00:18:05.802 "method": "nvmf_set_config", 00:18:05.802 "params": { 00:18:05.802 "discovery_filter": 
"match_any", 00:18:05.802 "admin_cmd_passthru": { 00:18:05.802 "identify_ctrlr": false 00:18:05.802 }, 00:18:05.802 "dhchap_digests": [ 00:18:05.802 "sha256", 00:18:05.802 "sha384", 00:18:05.802 "sha512" 00:18:05.802 ], 00:18:05.802 "dhchap_dhgroups": [ 00:18:05.802 "null", 00:18:05.802 "ffdhe2048", 00:18:05.802 "ffdhe3072", 00:18:05.802 "ffdhe4096", 00:18:05.802 "ffdhe6144", 00:18:05.802 "ffdhe8192" 00:18:05.802 ] 00:18:05.802 } 00:18:05.802 }, 00:18:05.802 { 00:18:05.802 "method": "nvmf_set_max_subsystems", 00:18:05.802 "params": { 00:18:05.802 "max_subsystems": 1024 00:18:05.802 } 00:18:05.802 }, 00:18:05.802 { 00:18:05.802 "method": "nvmf_set_crdt", 00:18:05.802 "params": { 00:18:05.802 "crdt1": 0, 00:18:05.802 "crdt2": 0, 00:18:05.802 "crdt3": 0 00:18:05.802 } 00:18:05.802 } 00:18:05.802 ] 00:18:05.802 }, 00:18:05.802 { 00:18:05.802 "subsystem": "iscsi", 00:18:05.802 "config": [ 00:18:05.802 { 00:18:05.802 "method": "iscsi_set_options", 00:18:05.802 "params": { 00:18:05.802 "node_base": "iqn.2016-06.io.spdk", 00:18:05.802 "max_sessions": 128, 00:18:05.802 "max_connections_per_session": 2, 00:18:05.802 "max_queue_depth": 64, 00:18:05.802 "default_time2wait": 2, 00:18:05.802 "default_time2retain": 20, 00:18:05.802 "first_burst_length": 8192, 00:18:05.802 "immediate_data": true, 00:18:05.802 "allow_duplicated_isid": false, 00:18:05.802 "error_recovery_level": 0, 00:18:05.802 "nop_timeout": 60, 00:18:05.802 "nop_in_interval": 30, 00:18:05.802 "disable_chap": false, 00:18:05.802 "require_chap": false, 00:18:05.802 "mutual_chap": false, 00:18:05.802 "chap_group": 0, 00:18:05.802 "max_large_datain_per_connection": 64, 00:18:05.802 "max_r2t_per_connection": 4, 00:18:05.802 "pdu_pool_size": 36864, 00:18:05.802 "immediate_data_pool_size": 16384, 00:18:05.802 "data_out_pool_size": 2048 00:18:05.802 } 00:18:05.802 } 00:18:05.802 ] 00:18:05.802 } 00:18:05.802 ] 00:18:05.802 }' 00:18:05.802 04:08:52 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73558 00:18:05.802 04:08:52 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73558 00:18:05.802 04:08:52 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73558 ']' 00:18:05.802 04:08:52 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.802 04:08:52 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.802 04:08:52 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.802 04:08:52 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.802 04:08:52 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:05.802 [2024-12-06 04:08:52.994943] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:18:05.802 [2024-12-06 04:08:52.995077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73558 ] 00:18:05.802 [2024-12-06 04:08:53.152793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.802 [2024-12-06 04:08:53.238384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.744 [2024-12-06 04:08:53.914736] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:06.744 [2024-12-06 04:08:53.915418] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:06.744 [2024-12-06 04:08:53.922839] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:06.744 [2024-12-06 04:08:53.922917] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:06.744 [2024-12-06 04:08:53.922926] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:06.744 [2024-12-06 04:08:53.922932] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:06.744 [2024-12-06 04:08:53.931799] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:06.744 [2024-12-06 04:08:53.931823] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:06.744 [2024-12-06 04:08:53.938741] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:06.744 [2024-12-06 04:08:53.938843] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:06.744 [2024-12-06 04:08:53.955729] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:06.744 04:08:53 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.744 04:08:53 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:18:06.744 04:08:53 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:18:06.744 04:08:53 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.744 04:08:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:06.744 04:08:53 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:18:06.744 04:08:54 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.744 04:08:54 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:06.744 04:08:54 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:18:06.744 04:08:54 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73558 00:18:06.744 04:08:54 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73558 ']' 00:18:06.744 04:08:54 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73558 00:18:06.744 04:08:54 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:18:06.744 04:08:54 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.744 04:08:54 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73558 00:18:06.744 killing process with pid 73558 00:18:06.744 04:08:54 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:06.744 
04:08:54 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:06.744 04:08:54 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73558' 00:18:06.744 04:08:54 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73558 00:18:06.745 04:08:54 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73558 00:18:07.685 [2024-12-06 04:08:55.060529] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:07.685 [2024-12-06 04:08:55.096751] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:07.685 [2024-12-06 04:08:55.096894] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:07.685 [2024-12-06 04:08:55.106761] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:07.685 [2024-12-06 04:08:55.106815] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:07.685 [2024-12-06 04:08:55.106821] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:07.685 [2024-12-06 04:08:55.106844] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:07.685 [2024-12-06 04:08:55.106975] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:09.070 04:08:56 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:18:09.070 00:18:09.070 real 0m7.253s 00:18:09.070 user 0m4.691s 00:18:09.070 sys 0m3.139s 00:18:09.070 04:08:56 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:09.070 04:08:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:09.070 ************************************ 00:18:09.070 END TEST test_save_ublk_config 00:18:09.070 ************************************ 00:18:09.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.070 04:08:56 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73631 00:18:09.070 04:08:56 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:09.070 04:08:56 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73631 00:18:09.070 04:08:56 ublk -- common/autotest_common.sh@835 -- # '[' -z 73631 ']' 00:18:09.070 04:08:56 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:09.070 04:08:56 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.070 04:08:56 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.070 04:08:56 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.070 04:08:56 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.070 04:08:56 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:09.070 [2024-12-06 04:08:56.583566] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:18:09.070 [2024-12-06 04:08:56.583696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73631 ] 00:18:09.329 [2024-12-06 04:08:56.738069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:09.329 [2024-12-06 04:08:56.822255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.329 [2024-12-06 04:08:56.822356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.898 04:08:57 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.898 04:08:57 ublk -- common/autotest_common.sh@868 -- # return 0 00:18:09.898 04:08:57 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:18:09.898 04:08:57 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:09.898 04:08:57 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:09.898 04:08:57 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:09.898 ************************************ 00:18:09.898 START TEST test_create_ublk 00:18:09.898 ************************************ 00:18:09.898 04:08:57 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:18:09.898 04:08:57 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:18:09.898 04:08:57 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.898 04:08:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:10.156 [2024-12-06 04:08:57.428736] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:10.156 [2024-12-06 04:08:57.430329] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:10.156 04:08:57 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.156 04:08:57 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:18:10.156 04:08:57 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:18:10.156 04:08:57 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.156 04:08:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:10.156 04:08:57 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.156 04:08:57 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:18:10.156 04:08:57 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:10.156 04:08:57 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.156 04:08:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:10.156 [2024-12-06 04:08:57.593860] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:18:10.156 [2024-12-06 04:08:57.594167] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:10.156 [2024-12-06 04:08:57.594182] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:10.156 [2024-12-06 04:08:57.594188] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:10.156 [2024-12-06 04:08:57.601761] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:10.156 [2024-12-06 04:08:57.601783] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:10.156 
[2024-12-06 04:08:57.609747] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:10.156 [2024-12-06 04:08:57.610268] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:10.156 [2024-12-06 04:08:57.630761] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:10.156 04:08:57 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.156 04:08:57 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:18:10.156 04:08:57 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:18:10.156 04:08:57 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:18:10.156 04:08:57 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.156 04:08:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:10.156 04:08:57 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.156 04:08:57 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:18:10.156 { 00:18:10.156 "ublk_device": "/dev/ublkb0", 00:18:10.156 "id": 0, 00:18:10.156 "queue_depth": 512, 00:18:10.156 "num_queues": 4, 00:18:10.156 "bdev_name": "Malloc0" 00:18:10.156 } 00:18:10.156 ]' 00:18:10.156 04:08:57 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:18:10.414 04:08:57 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:10.414 04:08:57 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:18:10.414 04:08:57 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:18:10.414 04:08:57 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:18:10.414 04:08:57 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:18:10.414 04:08:57 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:18:10.414 04:08:57 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:18:10.414 04:08:57 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:18:10.414 04:08:57 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:10.414 04:08:57 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:18:10.414 04:08:57 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:18:10.414 04:08:57 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:18:10.414 04:08:57 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:18:10.414 04:08:57 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:18:10.414 04:08:57 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:18:10.414 04:08:57 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:18:10.414 04:08:57 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:18:10.414 04:08:57 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:18:10.414 04:08:57 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:18:10.414 04:08:57 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
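run_fio_test above builds the fio command line flag by flag: a 10-second, time_based pattern write of 0xcc across the full 134217728-byte /dev/ublkb0, with verification folded into the same job via do_verify. The "verification read phase will never start" notice in the run that follows is expected, since time_based spends the entire runtime in the write phase. A sketch of the same job as a standalone fio job file, under the parameters above (the /tmp path is hypothetical; block size is fio's 4096-byte default, matching the bs shown in the job banner):

    # Write the equivalent job file and run it
    cat > /tmp/fio_verify.job <<'EOF'
    [fio_test]
    filename=/dev/ublkb0
    offset=0
    size=134217728
    rw=write
    direct=1
    time_based
    runtime=10
    do_verify=1
    verify=pattern
    verify_pattern=0xcc
    verify_state_save=0
    EOF
    fio /tmp/fio_verify.job
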
00:18:10.414 04:08:57 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:18:10.414 fio: verification read phase will never start because write phase uses all of runtime 00:18:10.414 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:18:10.414 fio-3.35 00:18:10.414 Starting 1 process 00:18:22.661 00:18:22.661 fio_test: (groupid=0, jobs=1): err= 0: pid=73675: Fri Dec 6 04:09:08 2024 00:18:22.661 write: IOPS=18.1k, BW=70.7MiB/s (74.1MB/s)(707MiB/10001msec); 0 zone resets 00:18:22.661 clat (usec): min=36, max=8027, avg=54.47, stdev=118.18 00:18:22.661 lat (usec): min=36, max=8028, avg=54.93, stdev=118.19 00:18:22.661 clat percentiles (usec): 00:18:22.661 | 1.00th=[ 41], 5.00th=[ 42], 10.00th=[ 44], 20.00th=[ 45], 00:18:22.661 | 30.00th=[ 47], 40.00th=[ 49], 50.00th=[ 50], 60.00th=[ 51], 00:18:22.661 | 70.00th=[ 52], 80.00th=[ 53], 90.00th=[ 57], 95.00th=[ 61], 00:18:22.661 | 99.00th=[ 71], 99.50th=[ 78], 99.90th=[ 2311], 99.95th=[ 3294], 00:18:22.661 | 99.99th=[ 3982] 00:18:22.661 bw ( KiB/s): min=33232, max=78528, per=99.89%, avg=72268.21, stdev=9883.03, samples=19 00:18:22.661 iops : min= 8308, max=19632, avg=18067.16, stdev=2470.81, samples=19 00:18:22.661 lat (usec) : 50=55.73%, 100=43.94%, 250=0.09%, 500=0.06%, 750=0.01% 00:18:22.661 lat (usec) : 1000=0.01% 00:18:22.661 lat (msec) : 2=0.05%, 4=0.11%, 10=0.01% 00:18:22.661 cpu : usr=3.03%, sys=13.59%, ctx=180925, majf=0, minf=797 00:18:22.661 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:22.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.661 issued rwts: total=0,180886,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.661 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:22.661 00:18:22.661 Run status group 0 (all jobs): 00:18:22.661 WRITE: bw=70.7MiB/s (74.1MB/s), 70.7MiB/s-70.7MiB/s (74.1MB/s-74.1MB/s), io=707MiB (741MB), run=10001-10001msec 00:18:22.661 00:18:22.661 Disk stats (read/write): 00:18:22.661 ublkb0: ios=0/179061, merge=0/0, ticks=0/8328, in_queue=8329, util=99.10% 00:18:22.661 04:09:08 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:18:22.661 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.661 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.662 [2024-12-06 04:09:08.053373] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:22.662 [2024-12-06 04:09:08.084210] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:22.662 [2024-12-06 04:09:08.085083] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:22.662 [2024-12-06 04:09:08.091750] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:22.662 [2024-12-06 04:09:08.091994] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:22.662 [2024-12-06 04:09:08.092007] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.662 04:09:08 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.662 [2024-12-06 04:09:08.105827] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:18:22.662 request: 00:18:22.662 { 00:18:22.662 "ublk_id": 0, 00:18:22.662 "method": "ublk_stop_disk", 00:18:22.662 "req_id": 1 00:18:22.662 } 00:18:22.662 Got JSON-RPC error response 00:18:22.662 response: 00:18:22.662 { 00:18:22.662 "code": -19, 00:18:22.662 "message": "No such device" 00:18:22.662 } 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:22.662 04:09:08 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.662 [2024-12-06 04:09:08.123819] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:22.662 [2024-12-06 04:09:08.127503] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:22.662 [2024-12-06 04:09:08.127542] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.662 04:09:08 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.662 04:09:08 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:18:22.662 04:09:08 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.662 04:09:08 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:22.662 04:09:08 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:18:22.662 04:09:08 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:22.662 04:09:08 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.662 04:09:08 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:22.662 04:09:08 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:18:22.662 ************************************ 00:18:22.662 END TEST test_create_ublk 00:18:22.662 ************************************ 00:18:22.662 04:09:08 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:22.662 00:18:22.662 real 0m11.170s 00:18:22.662 user 0m0.614s 00:18:22.662 sys 0m1.432s 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.662 04:09:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.662 04:09:08 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:18:22.662 04:09:08 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:22.662 04:09:08 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.662 04:09:08 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.662 ************************************ 00:18:22.662 START TEST test_create_multi_ublk 00:18:22.662 ************************************ 00:18:22.662 04:09:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:18:22.662 04:09:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:18:22.662 04:09:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.662 04:09:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.662 [2024-12-06 04:09:08.638762] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:22.662 [2024-12-06 04:09:08.640992] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:22.662 04:09:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.662 04:09:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:18:22.662 04:09:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:18:22.662 04:09:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:22.662 04:09:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:18:22.662 04:09:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.662 04:09:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.662 04:09:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.662 04:09:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:18:22.662 04:09:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:22.662 04:09:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.662 04:09:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.662 [2024-12-06 04:09:08.902866] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:18:22.662 [2024-12-06 04:09:08.903187] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:22.662 [2024-12-06 04:09:08.903200] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:22.662 [2024-12-06 04:09:08.903208] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:22.662 [2024-12-06 04:09:08.926745] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:22.662 [2024-12-06 04:09:08.926779] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:22.662 [2024-12-06 04:09:08.938743] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:22.662 [2024-12-06 04:09:08.939283] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:22.662 [2024-12-06 04:09:08.974748] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:22.662 04:09:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.662 04:09:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:18:22.662 04:09:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:22.662 04:09:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:18:22.662 04:09:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.662 04:09:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.662 04:09:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.662 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:18:22.662 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:18:22.662 04:09:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.662 04:09:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.662 [2024-12-06 04:09:09.180854] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:18:22.662 [2024-12-06 04:09:09.181162] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:18:22.662 [2024-12-06 04:09:09.181175] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:22.662 [2024-12-06 04:09:09.181181] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:22.662 [2024-12-06 04:09:09.188753] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:22.662 [2024-12-06 04:09:09.188775] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:22.662 [2024-12-06 04:09:09.196738] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:22.663 [2024-12-06 04:09:09.197266] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:22.663 [2024-12-06 04:09:09.205774] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:22.663 
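Each pass of the ublk.sh@64 loop traced above pairs one malloc bdev with one ublk device. Condensed into a standalone sketch (MAX_DEV_ID=3 is inferred from the `seq 0 3` earlier in this test; the rpc.py path is illustrative):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for i in $(seq 0 3); do
    # 128 MB backing bdev with a 4096-byte block size
    "$rpc" bdev_malloc_create -b "Malloc$i" 128 4096
    # expose it as /dev/ublkb$i with 4 queues of depth 512
    "$rpc" ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
  done
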
04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.663 [2024-12-06 04:09:09.372840] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:18:22.663 [2024-12-06 04:09:09.373155] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:18:22.663 [2024-12-06 04:09:09.373168] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:18:22.663 [2024-12-06 04:09:09.373175] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:18:22.663 [2024-12-06 04:09:09.380758] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:22.663 [2024-12-06 04:09:09.380782] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:22.663 [2024-12-06 04:09:09.388745] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:22.663 [2024-12-06 04:09:09.389276] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:18:22.663 [2024-12-06 04:09:09.392551] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.663 [2024-12-06 04:09:09.552873] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:18:22.663 [2024-12-06 04:09:09.553195] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:18:22.663 [2024-12-06 04:09:09.553208] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:18:22.663 [2024-12-06 04:09:09.553214] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:18:22.663 
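With all four devices due to come up, the harness verifies them through ublk_get_disks (the JSON array printed below) and per-field jq probes. A condensed equivalent of those checks, assuming the same four-disk layout:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  ublk_dev=$("$rpc" ublk_get_disks)
  for i in 0 1 2 3; do
    [[ $(jq -r ".[$i].ublk_device" <<< "$ublk_dev") == "/dev/ublkb$i" ]] || exit 1
    [[ $(jq -r ".[$i].id"          <<< "$ublk_dev") == "$i" ]]          || exit 1
    [[ $(jq -r ".[$i].queue_depth" <<< "$ublk_dev") == 512 ]]           || exit 1
    [[ $(jq -r ".[$i].num_queues"  <<< "$ublk_dev") == 4 ]]             || exit 1
    [[ $(jq -r ".[$i].bdev_name"   <<< "$ublk_dev") == "Malloc$i" ]]    || exit 1
  done
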
[2024-12-06 04:09:09.561929] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:22.663 [2024-12-06 04:09:09.561954] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:22.663 [2024-12-06 04:09:09.568768] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:22.663 [2024-12-06 04:09:09.569315] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:18:22.663 [2024-12-06 04:09:09.572182] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:18:22.663 { 00:18:22.663 "ublk_device": "/dev/ublkb0", 00:18:22.663 "id": 0, 00:18:22.663 "queue_depth": 512, 00:18:22.663 "num_queues": 4, 00:18:22.663 "bdev_name": "Malloc0" 00:18:22.663 }, 00:18:22.663 { 00:18:22.663 "ublk_device": "/dev/ublkb1", 00:18:22.663 "id": 1, 00:18:22.663 "queue_depth": 512, 00:18:22.663 "num_queues": 4, 00:18:22.663 "bdev_name": "Malloc1" 00:18:22.663 }, 00:18:22.663 { 00:18:22.663 "ublk_device": "/dev/ublkb2", 00:18:22.663 "id": 2, 00:18:22.663 "queue_depth": 512, 00:18:22.663 "num_queues": 4, 00:18:22.663 "bdev_name": "Malloc2" 00:18:22.663 }, 00:18:22.663 { 00:18:22.663 "ublk_device": "/dev/ublkb3", 00:18:22.663 "id": 3, 00:18:22.663 "queue_depth": 512, 00:18:22.663 "num_queues": 4, 00:18:22.663 "bdev_name": "Malloc3" 00:18:22.663 } 00:18:22.663 ]' 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:18:22.663 04:09:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:18:22.663 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:22.663 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:18:22.663 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:22.663 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:18:22.663 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:18:22.663 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:22.663 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:18:22.663 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:18:22.663 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:18:22.663 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:18:22.663 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:18:22.663 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:22.663 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.922 [2024-12-06 04:09:10.228841] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:22.922 [2024-12-06 04:09:10.272785] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:22.922 [2024-12-06 04:09:10.273548] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:22.922 [2024-12-06 04:09:10.280753] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:22.922 [2024-12-06 04:09:10.281010] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:22.922 [2024-12-06 04:09:10.281026] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.922 [2024-12-06 04:09:10.295841] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:22.922 [2024-12-06 04:09:10.334201] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:22.922 [2024-12-06 04:09:10.335133] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:22.922 [2024-12-06 04:09:10.343757] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:22.922 [2024-12-06 04:09:10.343998] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:22.922 [2024-12-06 04:09:10.344012] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.922 [2024-12-06 04:09:10.358841] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:18:22.922 [2024-12-06 04:09:10.393201] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:22.922 [2024-12-06 04:09:10.394117] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:18:22.922 [2024-12-06 04:09:10.399747] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:22.922 [2024-12-06 04:09:10.399981] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:18:22.922 [2024-12-06 04:09:10.399994] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.922 04:09:10 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:18:22.922 [2024-12-06 04:09:10.414851] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:18:22.922 [2024-12-06 04:09:10.445193] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:22.922 [2024-12-06 04:09:10.446033] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:18:23.180 [2024-12-06 04:09:10.452756] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:23.180 [2024-12-06 04:09:10.453004] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:18:23.180 [2024-12-06 04:09:10.453017] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:18:23.180 04:09:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.180 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:18:23.180 [2024-12-06 04:09:10.651810] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:23.180 [2024-12-06 04:09:10.655472] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:23.180 [2024-12-06 04:09:10.655516] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:23.180 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:18:23.180 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:23.180 04:09:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:23.180 04:09:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.180 04:09:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:23.747 04:09:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.747 04:09:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:23.747 04:09:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:23.747 04:09:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.747 04:09:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.005 04:09:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.006 04:09:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:24.006 04:09:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:18:24.006 04:09:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.006 04:09:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.264 04:09:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.264 04:09:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:24.264 04:09:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:18:24.264 04:09:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.264 04:09:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.264 04:09:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.264 04:09:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:18:24.264 04:09:11 
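Teardown in ublk.sh@85-94, as traced above, is symmetric with setup: stop every device, destroy the target (note the extended `-t 120` RPC timeout used for ublk_destroy_target), then drop the malloc bdevs. As a standalone sketch under the same four-disk assumption:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for i in $(seq 0 3); do
    "$rpc" ublk_stop_disk "$i"
  done
  # target destruction can take a while, so the harness allows 120 s for this RPC
  "$rpc" -t 120 ublk_destroy_target
  for i in $(seq 0 3); do
    "$rpc" bdev_malloc_delete "Malloc$i"
  done
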
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:24.264 04:09:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.264 04:09:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.523 04:09:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.523 04:09:11 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:24.523 04:09:11 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:18:24.524 04:09:11 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:24.524 04:09:11 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:24.524 04:09:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.524 04:09:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.524 04:09:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.524 04:09:11 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:24.524 04:09:11 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:18:24.524 ************************************ 00:18:24.524 END TEST test_create_multi_ublk 00:18:24.524 ************************************ 00:18:24.524 04:09:11 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:24.524 00:18:24.524 real 0m3.249s 00:18:24.524 user 0m0.813s 00:18:24.524 sys 0m0.141s 00:18:24.524 04:09:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:24.524 04:09:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.524 04:09:11 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:18:24.524 04:09:11 ublk -- ublk/ublk.sh@147 -- # cleanup 00:18:24.524 04:09:11 ublk -- ublk/ublk.sh@130 -- # killprocess 73631 00:18:24.524 04:09:11 ublk -- common/autotest_common.sh@954 -- # '[' -z 73631 ']' 00:18:24.524 04:09:11 ublk -- common/autotest_common.sh@958 -- # kill -0 73631 00:18:24.524 04:09:11 ublk -- common/autotest_common.sh@959 -- # uname 00:18:24.524 04:09:11 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.524 04:09:11 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73631 00:18:24.524 killing process with pid 73631 00:18:24.524 04:09:11 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:24.524 04:09:11 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:24.524 04:09:11 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73631' 00:18:24.524 04:09:11 ublk -- common/autotest_common.sh@973 -- # kill 73631 00:18:24.524 04:09:11 ublk -- common/autotest_common.sh@978 -- # wait 73631 00:18:25.091 [2024-12-06 04:09:12.478277] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:25.091 [2024-12-06 04:09:12.478338] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:25.658 00:18:25.658 real 0m24.076s 00:18:25.658 user 0m34.446s 00:18:25.658 sys 0m9.573s 00:18:25.658 04:09:13 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.658 ************************************ 00:18:25.658 END TEST ublk 00:18:25.658 ************************************ 00:18:25.658 04:09:13 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:25.658 04:09:13 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:25.658 
04:09:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:25.658 04:09:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.658 04:09:13 -- common/autotest_common.sh@10 -- # set +x 00:18:25.658 ************************************ 00:18:25.658 START TEST ublk_recovery 00:18:25.658 ************************************ 00:18:25.658 04:09:13 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:25.917 * Looking for test storage... 00:18:25.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:25.917 04:09:13 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:25.917 04:09:13 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:25.917 04:09:13 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:18:25.917 04:09:13 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:25.917 04:09:13 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:18:25.917 04:09:13 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.917 04:09:13 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:25.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.917 --rc genhtml_branch_coverage=1 00:18:25.917 --rc genhtml_function_coverage=1 00:18:25.917 --rc genhtml_legend=1 00:18:25.917 --rc geninfo_all_blocks=1 00:18:25.917 --rc geninfo_unexecuted_blocks=1 00:18:25.917 00:18:25.917 ' 00:18:25.917 04:09:13 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:25.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.917 --rc genhtml_branch_coverage=1 00:18:25.917 --rc genhtml_function_coverage=1 00:18:25.917 --rc genhtml_legend=1 00:18:25.917 --rc geninfo_all_blocks=1 00:18:25.917 --rc geninfo_unexecuted_blocks=1 00:18:25.917 00:18:25.917 ' 00:18:25.917 04:09:13 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:25.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.917 --rc genhtml_branch_coverage=1 00:18:25.917 --rc genhtml_function_coverage=1 00:18:25.917 --rc genhtml_legend=1 00:18:25.917 --rc geninfo_all_blocks=1 00:18:25.917 --rc geninfo_unexecuted_blocks=1 00:18:25.917 00:18:25.917 ' 00:18:25.917 04:09:13 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:25.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.917 --rc genhtml_branch_coverage=1 00:18:25.917 --rc genhtml_function_coverage=1 00:18:25.917 --rc genhtml_legend=1 00:18:25.917 --rc geninfo_all_blocks=1 00:18:25.917 --rc geninfo_unexecuted_blocks=1 00:18:25.917 00:18:25.917 ' 00:18:25.917 04:09:13 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:25.917 04:09:13 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:25.917 04:09:13 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:25.917 04:09:13 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:25.917 04:09:13 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:25.917 04:09:13 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:25.917 04:09:13 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:25.917 04:09:13 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:25.917 04:09:13 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:18:25.917 04:09:13 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:18:25.917 04:09:13 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74020 00:18:25.917 04:09:13 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:25.917 04:09:13 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:25.917 04:09:13 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74020 00:18:25.917 04:09:13 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74020 ']' 00:18:25.917 04:09:13 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.917 04:09:13 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.917 04:09:13 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.917 04:09:13 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.917 04:09:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.917 [2024-12-06 04:09:13.402549] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:18:25.917 [2024-12-06 04:09:13.402794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74020 ] 00:18:26.176 [2024-12-06 04:09:13.557606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:26.176 [2024-12-06 04:09:13.642729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.176 [2024-12-06 04:09:13.642767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.743 04:09:14 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.743 04:09:14 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:18:26.743 04:09:14 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:18:26.743 04:09:14 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.743 04:09:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.743 [2024-12-06 04:09:14.234735] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:26.743 [2024-12-06 04:09:14.236363] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:26.743 04:09:14 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.743 04:09:14 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:26.743 04:09:14 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.743 04:09:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:27.001 malloc0 00:18:27.001 04:09:14 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.001 04:09:14 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:18:27.001 04:09:14 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.001 04:09:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:27.001 [2024-12-06 04:09:14.322867] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:18:27.001 [2024-12-06 04:09:14.322959] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:18:27.001 [2024-12-06 04:09:14.322968] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:27.001 [2024-12-06 04:09:14.322975] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:27.001 [2024-12-06 04:09:14.331843] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:27.001 [2024-12-06 04:09:14.331866] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:27.001 [2024-12-06 04:09:14.338745] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:27.001 [2024-12-06 04:09:14.338878] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:27.001 [2024-12-06 04:09:14.360750] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:27.001 1 00:18:27.001 04:09:14 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.001 04:09:14 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:18:27.936 04:09:15 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74051 00:18:27.936 04:09:15 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:18:27.936 04:09:15 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:18:28.194 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:28.194 fio-3.35 00:18:28.194 Starting 1 process 00:18:33.460 04:09:20 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74020 00:18:33.460 04:09:20 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:18:38.897 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74020 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:18:38.897 04:09:25 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74165 00:18:38.897 04:09:25 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:38.897 04:09:25 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74165 00:18:38.897 04:09:25 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74165 ']' 00:18:38.897 04:09:25 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.897 04:09:25 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:38.897 04:09:25 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.898 04:09:25 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.898 04:09:25 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.898 04:09:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.898 [2024-12-06 04:09:25.447424] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
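This is the heart of the recovery test: fio keeps 128 requests in flight against /dev/ublkb1 while the first target (pid 74020 above) is SIGKILLed, and a second target is spawned to adopt the still-open kernel device. The driver side condenses to roughly the following sketch, assuming ublk_recovery.sh's environment (waitforlisten is the helper from autotest_common.sh) and the device and bdev names used in this run:

  # fio job from ublk_recovery.sh@30, pinned to cores 2-3
  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
    --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
    --time_based --runtime=60 &

  sleep 5
  kill -9 "$spdk_pid"            # hard-crash the first target mid-I/O
  sleep 5

  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &   # respawn on the same cores
  spdk_pid=$!
  waitforlisten "$spdk_pid"

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" ublk_create_target
  "$rpc" bdev_malloc_create -b malloc0 64 4096
  "$rpc" ublk_recover_disk malloc0 1   # re-attach kernel ublk 1 to the new bdev
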
00:18:38.898 [2024-12-06 04:09:25.447518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74165 ] 00:18:38.898 [2024-12-06 04:09:25.602345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:38.898 [2024-12-06 04:09:25.701799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.898 [2024-12-06 04:09:25.701810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.898 04:09:26 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.898 04:09:26 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:18:38.898 04:09:26 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:18:38.898 04:09:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.898 04:09:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.898 [2024-12-06 04:09:26.287739] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:38.898 [2024-12-06 04:09:26.289615] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:38.898 04:09:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.898 04:09:26 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:38.898 04:09:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.898 04:09:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.898 malloc0 00:18:38.898 04:09:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.898 04:09:26 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:18:38.898 04:09:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.898 04:09:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.898 [2024-12-06 04:09:26.391870] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:18:38.898 [2024-12-06 04:09:26.391913] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:38.898 [2024-12-06 04:09:26.391923] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:38.898 [2024-12-06 04:09:26.399773] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:38.898 [2024-12-06 04:09:26.399793] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:18:38.898 [2024-12-06 04:09:26.399801] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:18:38.898 [2024-12-06 04:09:26.399876] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:18:38.898 1 00:18:38.898 04:09:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.898 04:09:26 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74051 00:18:38.898 [2024-12-06 04:09:26.407741] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:18:38.898 [2024-12-06 04:09:26.410342] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:18:38.898 [2024-12-06 04:09:26.415940] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:18:38.898 [2024-12-06 
04:09:26.415963] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:19:35.112 00:19:35.112 fio_test: (groupid=0, jobs=1): err= 0: pid=74060: Fri Dec 6 04:10:15 2024 00:19:35.112 read: IOPS=27.6k, BW=108MiB/s (113MB/s)(6460MiB/60002msec) 00:19:35.112 slat (nsec): min=851, max=271177, avg=4888.25, stdev=1608.18 00:19:35.112 clat (usec): min=796, max=6048.6k, avg=2298.27, stdev=39049.67 00:19:35.112 lat (usec): min=800, max=6048.6k, avg=2303.15, stdev=39049.67 00:19:35.112 clat percentiles (usec): 00:19:35.112 | 1.00th=[ 1680], 5.00th=[ 1795], 10.00th=[ 1827], 20.00th=[ 1860], 00:19:35.112 | 30.00th=[ 1876], 40.00th=[ 1893], 50.00th=[ 1909], 60.00th=[ 1942], 00:19:35.112 | 70.00th=[ 1958], 80.00th=[ 2008], 90.00th=[ 2180], 95.00th=[ 2900], 00:19:35.112 | 99.00th=[ 4817], 99.50th=[ 5473], 99.90th=[ 6718], 99.95th=[ 7177], 00:19:35.112 | 99.99th=[12387] 00:19:35.112 bw ( KiB/s): min=16152, max=130032, per=100.00%, avg=121576.71, stdev=15106.15, samples=108 00:19:35.112 iops : min= 4038, max=32508, avg=30394.14, stdev=3776.53, samples=108 00:19:35.112 write: IOPS=27.5k, BW=108MiB/s (113MB/s)(6455MiB/60002msec); 0 zone resets 00:19:35.112 slat (nsec): min=891, max=1278.3k, avg=4947.91, stdev=2096.30 00:19:35.112 clat (usec): min=653, max=6048.6k, avg=2336.87, stdev=36126.93 00:19:35.112 lat (usec): min=657, max=6048.6k, avg=2341.82, stdev=36126.93 00:19:35.112 clat percentiles (usec): 00:19:35.112 | 1.00th=[ 1729], 5.00th=[ 1876], 10.00th=[ 1909], 20.00th=[ 1942], 00:19:35.112 | 30.00th=[ 1958], 40.00th=[ 1975], 50.00th=[ 2008], 60.00th=[ 2024], 00:19:35.112 | 70.00th=[ 2057], 80.00th=[ 2089], 90.00th=[ 2278], 95.00th=[ 2802], 00:19:35.112 | 99.00th=[ 4752], 99.50th=[ 5538], 99.90th=[ 6652], 99.95th=[ 7242], 00:19:35.112 | 99.99th=[12256] 00:19:35.112 bw ( KiB/s): min=15944, max=130000, per=100.00%, avg=121474.23, stdev=15149.63, samples=108 00:19:35.112 iops : min= 3986, max=32500, avg=30368.54, stdev=3787.41, samples=108 00:19:35.112 lat (usec) : 750=0.01%, 1000=0.01% 00:19:35.112 lat (msec) : 2=64.45%, 4=33.26%, 10=2.28%, 20=0.01%, >=2000=0.01% 00:19:35.112 cpu : usr=6.13%, sys=27.72%, ctx=110703, majf=0, minf=13 00:19:35.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:35.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:35.112 issued rwts: total=1653834,1652357,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:35.112 00:19:35.112 Run status group 0 (all jobs): 00:19:35.112 READ: bw=108MiB/s (113MB/s), 108MiB/s-108MiB/s (113MB/s-113MB/s), io=6460MiB (6774MB), run=60002-60002msec 00:19:35.112 WRITE: bw=108MiB/s (113MB/s), 108MiB/s-108MiB/s (113MB/s-113MB/s), io=6455MiB (6768MB), run=60002-60002msec 00:19:35.112 00:19:35.112 Disk stats (read/write): 00:19:35.112 ublkb1: ios=1651237/1649717, merge=0/0, ticks=3711251/3639921, in_queue=7351173, util=99.92% 00:19:35.112 04:10:15 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:19:35.112 04:10:15 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.112 04:10:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.112 [2024-12-06 04:10:15.620632] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:35.112 [2024-12-06 04:10:15.658761] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:19:35.112 [2024-12-06 04:10:15.658923] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:35.112 [2024-12-06 04:10:15.670739] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:35.112 [2024-12-06 04:10:15.670837] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:35.112 [2024-12-06 04:10:15.670847] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:35.112 04:10:15 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.112 04:10:15 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:19:35.112 04:10:15 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.112 04:10:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.112 [2024-12-06 04:10:15.674895] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:35.112 [2024-12-06 04:10:15.681735] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:35.112 [2024-12-06 04:10:15.681774] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:35.112 04:10:15 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.112 04:10:15 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:19:35.112 04:10:15 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:19:35.112 04:10:15 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74165 00:19:35.112 04:10:15 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74165 ']' 00:19:35.112 04:10:15 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74165 00:19:35.112 04:10:15 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:19:35.112 04:10:15 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.112 04:10:15 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74165 00:19:35.112 04:10:15 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:35.112 04:10:15 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:35.112 04:10:15 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74165' 00:19:35.112 killing process with pid 74165 00:19:35.112 04:10:15 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74165 00:19:35.112 04:10:15 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74165 00:19:35.112 [2024-12-06 04:10:16.759790] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:35.112 [2024-12-06 04:10:16.759839] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:35.112 00:19:35.112 real 1m4.323s 00:19:35.112 user 1m44.696s 00:19:35.112 sys 0m33.495s 00:19:35.112 04:10:17 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:35.112 04:10:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.112 ************************************ 00:19:35.112 END TEST ublk_recovery 00:19:35.112 ************************************ 00:19:35.112 04:10:17 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:19:35.112 04:10:17 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:35.112 04:10:17 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:35.112 04:10:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:35.112 04:10:17 -- common/autotest_common.sh@10 -- # set +x 00:19:35.112 04:10:17 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:35.112 04:10:17 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:35.112 04:10:17 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 
']' 00:19:35.112 04:10:17 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:35.112 04:10:17 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:35.112 04:10:17 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:35.112 04:10:17 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:35.112 04:10:17 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:35.112 04:10:17 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:35.112 04:10:17 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:19:35.112 04:10:17 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:35.112 04:10:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:35.112 04:10:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:35.112 04:10:17 -- common/autotest_common.sh@10 -- # set +x 00:19:35.112 ************************************ 00:19:35.112 START TEST ftl 00:19:35.112 ************************************ 00:19:35.112 04:10:17 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:35.112 * Looking for test storage... 00:19:35.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:35.112 04:10:17 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:35.112 04:10:17 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:19:35.112 04:10:17 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:35.112 04:10:17 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:35.112 04:10:17 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:35.112 04:10:17 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:35.112 04:10:17 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:35.112 04:10:17 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:19:35.112 04:10:17 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:19:35.112 04:10:17 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:19:35.112 04:10:17 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:19:35.112 04:10:17 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:19:35.112 04:10:17 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:19:35.112 04:10:17 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:19:35.112 04:10:17 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:35.113 04:10:17 ftl -- scripts/common.sh@344 -- # case "$op" in 00:19:35.113 04:10:17 ftl -- scripts/common.sh@345 -- # : 1 00:19:35.113 04:10:17 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:35.113 04:10:17 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:35.113 04:10:17 ftl -- scripts/common.sh@365 -- # decimal 1 00:19:35.113 04:10:17 ftl -- scripts/common.sh@353 -- # local d=1 00:19:35.113 04:10:17 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:35.113 04:10:17 ftl -- scripts/common.sh@355 -- # echo 1 00:19:35.113 04:10:17 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:19:35.113 04:10:17 ftl -- scripts/common.sh@366 -- # decimal 2 00:19:35.113 04:10:17 ftl -- scripts/common.sh@353 -- # local d=2 00:19:35.113 04:10:17 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:35.113 04:10:17 ftl -- scripts/common.sh@355 -- # echo 2 00:19:35.113 04:10:17 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:19:35.113 04:10:17 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:35.113 04:10:17 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:35.113 04:10:17 ftl -- scripts/common.sh@368 -- # return 0 00:19:35.113 04:10:17 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:35.113 04:10:17 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:35.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.113 --rc genhtml_branch_coverage=1 00:19:35.113 --rc genhtml_function_coverage=1 00:19:35.113 --rc genhtml_legend=1 00:19:35.113 --rc geninfo_all_blocks=1 00:19:35.113 --rc geninfo_unexecuted_blocks=1 00:19:35.113 00:19:35.113 ' 00:19:35.113 04:10:17 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:35.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.113 --rc genhtml_branch_coverage=1 00:19:35.113 --rc genhtml_function_coverage=1 00:19:35.113 --rc genhtml_legend=1 00:19:35.113 --rc geninfo_all_blocks=1 00:19:35.113 --rc geninfo_unexecuted_blocks=1 00:19:35.113 00:19:35.113 ' 00:19:35.113 04:10:17 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:35.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.113 --rc genhtml_branch_coverage=1 00:19:35.113 --rc genhtml_function_coverage=1 00:19:35.113 --rc genhtml_legend=1 00:19:35.113 --rc geninfo_all_blocks=1 00:19:35.113 --rc geninfo_unexecuted_blocks=1 00:19:35.113 00:19:35.113 ' 00:19:35.113 04:10:17 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:35.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.113 --rc genhtml_branch_coverage=1 00:19:35.113 --rc genhtml_function_coverage=1 00:19:35.113 --rc genhtml_legend=1 00:19:35.113 --rc geninfo_all_blocks=1 00:19:35.113 --rc geninfo_unexecuted_blocks=1 00:19:35.113 00:19:35.113 ' 00:19:35.113 04:10:17 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:35.113 04:10:17 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:35.113 04:10:17 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:35.113 04:10:17 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:35.113 04:10:17 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
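The trace above is ftl/common.sh resolving its directories from the script's own location (the rootdir and rpc_py assignments continue just below). The same resolution as a standalone sketch, assuming it runs from inside the checked-out repo:

  testdir=$(readlink -f "$(dirname "$0")")   # -> .../spdk/test/ftl
  rootdir=$(readlink -f "$testdir/../..")    # -> .../spdk
  rpc_py=$rootdir/scripts/rpc.py
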
00:19:35.113 04:10:17 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:35.113 04:10:17 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:35.113 04:10:17 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:35.113 04:10:17 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:35.113 04:10:17 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:35.113 04:10:17 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:35.113 04:10:17 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:35.113 04:10:17 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:35.113 04:10:17 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:35.113 04:10:17 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:35.113 04:10:17 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:35.113 04:10:17 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:35.113 04:10:17 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:35.113 04:10:17 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:35.113 04:10:17 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:35.113 04:10:17 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:35.113 04:10:17 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:35.113 04:10:17 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:35.113 04:10:17 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:35.113 04:10:17 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:35.113 04:10:17 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:35.113 04:10:17 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:35.113 04:10:17 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:35.113 04:10:17 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:35.113 04:10:17 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:35.113 04:10:17 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:19:35.113 04:10:17 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:19:35.113 04:10:17 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:19:35.113 04:10:17 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:19:35.113 04:10:17 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:35.113 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:35.113 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:35.113 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:35.113 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:35.113 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:35.113 04:10:18 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74970 00:19:35.113 04:10:18 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:19:35.113 04:10:18 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74970 00:19:35.113 04:10:18 ftl -- common/autotest_common.sh@835 -- # '[' -z 74970 ']' 00:19:35.113 04:10:18 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.113 04:10:18 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.113 04:10:18 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.113 04:10:18 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.113 04:10:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:35.113 [2024-12-06 04:10:18.213035] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:19:35.113 [2024-12-06 04:10:18.213314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74970 ] 00:19:35.113 [2024-12-06 04:10:18.373583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.113 [2024-12-06 04:10:18.489998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.113 04:10:18 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.113 04:10:18 ftl -- common/autotest_common.sh@868 -- # return 0 00:19:35.113 04:10:18 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:19:35.113 04:10:19 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:35.113 04:10:20 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:19:35.113 04:10:20 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:35.113 04:10:20 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:19:35.113 04:10:20 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:35.113 04:10:20 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:35.113 04:10:20 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:19:35.113 04:10:20 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:19:35.113 04:10:20 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:19:35.113 04:10:20 ftl -- ftl/ftl.sh@50 -- # break 00:19:35.113 04:10:20 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:19:35.113 04:10:20 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:19:35.113 04:10:20 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:35.113 04:10:20 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:35.113 04:10:20 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:19:35.113 04:10:20 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:19:35.113 04:10:20 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:19:35.113 04:10:20 ftl -- ftl/ftl.sh@63 -- # break 00:19:35.113 04:10:20 ftl -- ftl/ftl.sh@66 -- # killprocess 74970 00:19:35.113 04:10:20 ftl -- common/autotest_common.sh@954 -- # '[' -z 74970 ']' 00:19:35.113 04:10:20 ftl -- common/autotest_common.sh@958 -- # kill -0 74970 00:19:35.113 04:10:20 ftl -- common/autotest_common.sh@959 -- # uname 00:19:35.113 04:10:20 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.113 04:10:20 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74970 00:19:35.113 killing process with pid 74970 00:19:35.113 04:10:20 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:35.113 04:10:20 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:35.113 04:10:20 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74970' 00:19:35.113 04:10:20 ftl -- common/autotest_common.sh@973 -- # kill 74970 00:19:35.113 04:10:20 ftl -- common/autotest_common.sh@978 -- # wait 74970 00:19:35.113 04:10:22 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:19:35.113 04:10:22 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:35.113 04:10:22 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:35.113 04:10:22 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:35.113 04:10:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:35.113 ************************************ 00:19:35.113 START TEST ftl_fio_basic 00:19:35.113 ************************************ 00:19:35.113 04:10:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:35.113 * Looking for test storage... 00:19:35.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:35.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.114 --rc genhtml_branch_coverage=1 00:19:35.114 --rc genhtml_function_coverage=1 00:19:35.114 --rc genhtml_legend=1 00:19:35.114 --rc geninfo_all_blocks=1 00:19:35.114 --rc geninfo_unexecuted_blocks=1 00:19:35.114 00:19:35.114 ' 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:35.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.114 --rc genhtml_branch_coverage=1 00:19:35.114 --rc genhtml_function_coverage=1 00:19:35.114 --rc genhtml_legend=1 00:19:35.114 --rc geninfo_all_blocks=1 00:19:35.114 --rc geninfo_unexecuted_blocks=1 00:19:35.114 00:19:35.114 ' 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:35.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.114 --rc genhtml_branch_coverage=1 00:19:35.114 --rc genhtml_function_coverage=1 00:19:35.114 --rc genhtml_legend=1 00:19:35.114 --rc geninfo_all_blocks=1 00:19:35.114 --rc geninfo_unexecuted_blocks=1 00:19:35.114 00:19:35.114 ' 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:35.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.114 --rc genhtml_branch_coverage=1 00:19:35.114 --rc genhtml_function_coverage=1 00:19:35.114 --rc genhtml_legend=1 00:19:35.114 --rc geninfo_all_blocks=1 00:19:35.114 --rc geninfo_unexecuted_blocks=1 00:19:35.114 00:19:35.114 ' 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
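As with the ftl suite earlier, ftl_fio_basic is driven through the run_test wrapper, which prints the asterisk rules and START/END TEST banners seen in this log, checks its argument count, and propagates the wrapped script's exit code. A rough sketch of the wrapper's shape (the real run_test in test/common/autotest_common.sh also handles the xtrace_disable bookkeeping and timing records visible in the trace; this is a simplified stand-in, not the actual implementation):

    run_test() {
        local test_name=$1
        shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        local start=$SECONDS rc=0
        "$@" || rc=$?                        # run the suite, keep its exit code
        echo '************************************'
        echo "END TEST $test_name ($((SECONDS - start))s, rc=$rc)"
        echo '************************************'
        return $rc
    }

    run_test ftl_fio_basic "$rootdir/test/ftl/fio.sh" 0000:00:11.0 0000:00:10.0 basic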
00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75102 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75102 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75102 ']' 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.114 04:10:22 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:35.114 [2024-12-06 04:10:22.605711] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
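Here fio.sh launches its own spdk_tgt with core mask -m 7 (three reactors, matching the "Total cores available: 3" line just below) and then blocks in waitforlisten 75102 until the target's RPC socket answers or the process dies. A condensed sketch of that wait loop, mirroring the locals the trace shows (rpc_addr, max_retries); the real helper in test/common/autotest_common.sh is more thorough:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1      # target died while waiting
            # Done once the RPC socket answers a trivial request.
            "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }

    "$rootdir/build/bin/spdk_tgt" -m 7 &
    svcpid=$!
    waitforlisten "$svcpid"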
00:19:35.114 [2024-12-06 04:10:22.606006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75102 ] 00:19:35.373 [2024-12-06 04:10:22.761876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:35.373 [2024-12-06 04:10:22.846138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.373 [2024-12-06 04:10:22.846304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.373 [2024-12-06 04:10:22.846318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.940 04:10:23 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.940 04:10:23 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:19:35.940 04:10:23 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:35.940 04:10:23 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:19:35.940 04:10:23 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:35.940 04:10:23 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:19:35.940 04:10:23 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:19:35.940 04:10:23 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:36.199 04:10:23 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:36.199 04:10:23 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:19:36.199 04:10:23 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:36.199 04:10:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:36.199 04:10:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:36.199 04:10:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:36.199 04:10:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:36.199 04:10:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:36.457 04:10:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:36.457 { 00:19:36.457 "name": "nvme0n1", 00:19:36.457 "aliases": [ 00:19:36.457 "3871b620-046c-4df0-a508-0ac4d3683ffa" 00:19:36.457 ], 00:19:36.457 "product_name": "NVMe disk", 00:19:36.457 "block_size": 4096, 00:19:36.457 "num_blocks": 1310720, 00:19:36.457 "uuid": "3871b620-046c-4df0-a508-0ac4d3683ffa", 00:19:36.457 "numa_id": -1, 00:19:36.457 "assigned_rate_limits": { 00:19:36.457 "rw_ios_per_sec": 0, 00:19:36.457 "rw_mbytes_per_sec": 0, 00:19:36.457 "r_mbytes_per_sec": 0, 00:19:36.457 "w_mbytes_per_sec": 0 00:19:36.457 }, 00:19:36.457 "claimed": false, 00:19:36.457 "zoned": false, 00:19:36.457 "supported_io_types": { 00:19:36.457 "read": true, 00:19:36.457 "write": true, 00:19:36.457 "unmap": true, 00:19:36.457 "flush": true, 00:19:36.457 "reset": true, 00:19:36.457 "nvme_admin": true, 00:19:36.457 "nvme_io": true, 00:19:36.457 "nvme_io_md": false, 00:19:36.457 "write_zeroes": true, 00:19:36.457 "zcopy": false, 00:19:36.457 "get_zone_info": false, 00:19:36.457 "zone_management": false, 00:19:36.457 "zone_append": false, 00:19:36.457 "compare": true, 00:19:36.457 "compare_and_write": false, 00:19:36.457 "abort": true, 00:19:36.457 
"seek_hole": false, 00:19:36.457 "seek_data": false, 00:19:36.457 "copy": true, 00:19:36.457 "nvme_iov_md": false 00:19:36.457 }, 00:19:36.457 "driver_specific": { 00:19:36.457 "nvme": [ 00:19:36.458 { 00:19:36.458 "pci_address": "0000:00:11.0", 00:19:36.458 "trid": { 00:19:36.458 "trtype": "PCIe", 00:19:36.458 "traddr": "0000:00:11.0" 00:19:36.458 }, 00:19:36.458 "ctrlr_data": { 00:19:36.458 "cntlid": 0, 00:19:36.458 "vendor_id": "0x1b36", 00:19:36.458 "model_number": "QEMU NVMe Ctrl", 00:19:36.458 "serial_number": "12341", 00:19:36.458 "firmware_revision": "8.0.0", 00:19:36.458 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:36.458 "oacs": { 00:19:36.458 "security": 0, 00:19:36.458 "format": 1, 00:19:36.458 "firmware": 0, 00:19:36.458 "ns_manage": 1 00:19:36.458 }, 00:19:36.458 "multi_ctrlr": false, 00:19:36.458 "ana_reporting": false 00:19:36.458 }, 00:19:36.458 "vs": { 00:19:36.458 "nvme_version": "1.4" 00:19:36.458 }, 00:19:36.458 "ns_data": { 00:19:36.458 "id": 1, 00:19:36.458 "can_share": false 00:19:36.458 } 00:19:36.458 } 00:19:36.458 ], 00:19:36.458 "mp_policy": "active_passive" 00:19:36.458 } 00:19:36.458 } 00:19:36.458 ]' 00:19:36.458 04:10:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:36.458 04:10:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:36.458 04:10:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:36.458 04:10:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:36.458 04:10:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:36.458 04:10:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:19:36.458 04:10:23 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:19:36.458 04:10:23 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:36.458 04:10:23 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:19:36.458 04:10:23 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:36.715 04:10:23 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:36.715 04:10:24 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:19:36.715 04:10:24 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:36.975 04:10:24 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=8793a231-f2e0-4266-b363-99d039d928e4 00:19:36.975 04:10:24 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8793a231-f2e0-4266-b363-99d039d928e4 00:19:37.233 04:10:24 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=ef57fba5-30ae-49ce-acdb-bf5a256a6e76 00:19:37.233 04:10:24 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ef57fba5-30ae-49ce-acdb-bf5a256a6e76 00:19:37.233 04:10:24 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:19:37.233 04:10:24 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:37.233 04:10:24 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=ef57fba5-30ae-49ce-acdb-bf5a256a6e76 00:19:37.233 04:10:24 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:19:37.233 04:10:24 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size ef57fba5-30ae-49ce-acdb-bf5a256a6e76 00:19:37.233 04:10:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=ef57fba5-30ae-49ce-acdb-bf5a256a6e76 
00:19:37.233 04:10:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:37.233 04:10:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:37.233 04:10:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:37.233 04:10:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef57fba5-30ae-49ce-acdb-bf5a256a6e76 00:19:37.233 04:10:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:37.233 { 00:19:37.233 "name": "ef57fba5-30ae-49ce-acdb-bf5a256a6e76", 00:19:37.233 "aliases": [ 00:19:37.233 "lvs/nvme0n1p0" 00:19:37.233 ], 00:19:37.233 "product_name": "Logical Volume", 00:19:37.233 "block_size": 4096, 00:19:37.233 "num_blocks": 26476544, 00:19:37.233 "uuid": "ef57fba5-30ae-49ce-acdb-bf5a256a6e76", 00:19:37.233 "assigned_rate_limits": { 00:19:37.233 "rw_ios_per_sec": 0, 00:19:37.233 "rw_mbytes_per_sec": 0, 00:19:37.233 "r_mbytes_per_sec": 0, 00:19:37.233 "w_mbytes_per_sec": 0 00:19:37.233 }, 00:19:37.233 "claimed": false, 00:19:37.233 "zoned": false, 00:19:37.233 "supported_io_types": { 00:19:37.233 "read": true, 00:19:37.233 "write": true, 00:19:37.233 "unmap": true, 00:19:37.233 "flush": false, 00:19:37.233 "reset": true, 00:19:37.233 "nvme_admin": false, 00:19:37.233 "nvme_io": false, 00:19:37.233 "nvme_io_md": false, 00:19:37.233 "write_zeroes": true, 00:19:37.233 "zcopy": false, 00:19:37.233 "get_zone_info": false, 00:19:37.233 "zone_management": false, 00:19:37.233 "zone_append": false, 00:19:37.233 "compare": false, 00:19:37.233 "compare_and_write": false, 00:19:37.233 "abort": false, 00:19:37.233 "seek_hole": true, 00:19:37.233 "seek_data": true, 00:19:37.233 "copy": false, 00:19:37.233 "nvme_iov_md": false 00:19:37.233 }, 00:19:37.233 "driver_specific": { 00:19:37.233 "lvol": { 00:19:37.233 "lvol_store_uuid": "8793a231-f2e0-4266-b363-99d039d928e4", 00:19:37.233 "base_bdev": "nvme0n1", 00:19:37.233 "thin_provision": true, 00:19:37.233 "num_allocated_clusters": 0, 00:19:37.233 "snapshot": false, 00:19:37.233 "clone": false, 00:19:37.233 "esnap_clone": false 00:19:37.233 } 00:19:37.233 } 00:19:37.233 } 00:19:37.233 ]' 00:19:37.233 04:10:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:37.491 04:10:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:37.491 04:10:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:37.491 04:10:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:37.491 04:10:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:37.491 04:10:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:37.491 04:10:24 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:19:37.491 04:10:24 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:19:37.491 04:10:24 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:37.761 04:10:25 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:37.761 04:10:25 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:37.761 04:10:25 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size ef57fba5-30ae-49ce-acdb-bf5a256a6e76 00:19:37.761 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=ef57fba5-30ae-49ce-acdb-bf5a256a6e76 00:19:37.761 04:10:25 
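Summarizing the carve-out just traced: clear_lvols (from test/ftl/common.sh) wipes any stale lvol stores, a fresh store named lvs is created on nvme0n1, and a thin-provisioned 103424 MiB volume becomes FTL's base bdev. A condensed sketch of the sequence using the same SPDK RPCs the trace shows:

    rpc_py=$rootdir/scripts/rpc.py

    clear_lvols() {
        local stores store
        stores=$($rpc_py bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
        for store in $stores; do
            $rpc_py bdev_lvol_delete_lvstore -u "$store"
        done
    }

    clear_lvols
    lvs=$($rpc_py bdev_lvol_create_lvstore nvme0n1 lvs)
    # -t = thin provisioning: 103424 MiB of virtual size on a 5120 MiB disk,
    # hence "num_allocated_clusters": 0 in the JSON dumps that follow.
    split_bdev=$($rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")

One aside on the shell error recorded just below ("fio.sh: line 52: [: -eq: unary operator expected"): that is the classic unquoted-empty-variable failure, '[' $var -eq 1 ']' with $var expanding to nothing. The run survives because the failed test simply takes the false branch; a defaulted expansion avoids the noise (the variable name here is hypothetical):

    [ "${l2p_flat:-0}" -eq 1 ] && echo 'flat L2P requested'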
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:37.761 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:37.761 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:37.761 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef57fba5-30ae-49ce-acdb-bf5a256a6e76 00:19:37.761 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:37.761 { 00:19:37.761 "name": "ef57fba5-30ae-49ce-acdb-bf5a256a6e76", 00:19:37.761 "aliases": [ 00:19:37.761 "lvs/nvme0n1p0" 00:19:37.761 ], 00:19:37.761 "product_name": "Logical Volume", 00:19:37.761 "block_size": 4096, 00:19:37.761 "num_blocks": 26476544, 00:19:37.761 "uuid": "ef57fba5-30ae-49ce-acdb-bf5a256a6e76", 00:19:37.761 "assigned_rate_limits": { 00:19:37.761 "rw_ios_per_sec": 0, 00:19:37.761 "rw_mbytes_per_sec": 0, 00:19:37.761 "r_mbytes_per_sec": 0, 00:19:37.761 "w_mbytes_per_sec": 0 00:19:37.761 }, 00:19:37.761 "claimed": false, 00:19:37.761 "zoned": false, 00:19:37.761 "supported_io_types": { 00:19:37.761 "read": true, 00:19:37.761 "write": true, 00:19:37.761 "unmap": true, 00:19:37.761 "flush": false, 00:19:37.761 "reset": true, 00:19:37.761 "nvme_admin": false, 00:19:37.761 "nvme_io": false, 00:19:37.761 "nvme_io_md": false, 00:19:37.761 "write_zeroes": true, 00:19:37.761 "zcopy": false, 00:19:37.761 "get_zone_info": false, 00:19:37.761 "zone_management": false, 00:19:37.761 "zone_append": false, 00:19:37.761 "compare": false, 00:19:37.761 "compare_and_write": false, 00:19:37.762 "abort": false, 00:19:37.762 "seek_hole": true, 00:19:37.762 "seek_data": true, 00:19:37.762 "copy": false, 00:19:37.762 "nvme_iov_md": false 00:19:37.762 }, 00:19:37.762 "driver_specific": { 00:19:37.762 "lvol": { 00:19:37.762 "lvol_store_uuid": "8793a231-f2e0-4266-b363-99d039d928e4", 00:19:37.762 "base_bdev": "nvme0n1", 00:19:37.762 "thin_provision": true, 00:19:37.762 "num_allocated_clusters": 0, 00:19:37.762 "snapshot": false, 00:19:37.762 "clone": false, 00:19:37.762 "esnap_clone": false 00:19:37.762 } 00:19:37.762 } 00:19:37.762 } 00:19:37.762 ]' 00:19:37.762 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:37.762 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:37.762 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:38.020 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:38.020 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:38.020 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:38.020 04:10:25 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:19:38.020 04:10:25 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:38.020 04:10:25 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:19:38.020 04:10:25 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:19:38.020 04:10:25 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:19:38.020 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:19:38.020 04:10:25 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size ef57fba5-30ae-49ce-acdb-bf5a256a6e76 00:19:38.020 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=ef57fba5-30ae-49ce-acdb-bf5a256a6e76 00:19:38.020 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:38.020 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:38.020 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:38.020 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef57fba5-30ae-49ce-acdb-bf5a256a6e76 00:19:38.276 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:38.276 { 00:19:38.276 "name": "ef57fba5-30ae-49ce-acdb-bf5a256a6e76", 00:19:38.276 "aliases": [ 00:19:38.276 "lvs/nvme0n1p0" 00:19:38.276 ], 00:19:38.276 "product_name": "Logical Volume", 00:19:38.276 "block_size": 4096, 00:19:38.276 "num_blocks": 26476544, 00:19:38.276 "uuid": "ef57fba5-30ae-49ce-acdb-bf5a256a6e76", 00:19:38.276 "assigned_rate_limits": { 00:19:38.276 "rw_ios_per_sec": 0, 00:19:38.276 "rw_mbytes_per_sec": 0, 00:19:38.276 "r_mbytes_per_sec": 0, 00:19:38.276 "w_mbytes_per_sec": 0 00:19:38.276 }, 00:19:38.276 "claimed": false, 00:19:38.276 "zoned": false, 00:19:38.276 "supported_io_types": { 00:19:38.276 "read": true, 00:19:38.276 "write": true, 00:19:38.276 "unmap": true, 00:19:38.276 "flush": false, 00:19:38.276 "reset": true, 00:19:38.276 "nvme_admin": false, 00:19:38.276 "nvme_io": false, 00:19:38.276 "nvme_io_md": false, 00:19:38.276 "write_zeroes": true, 00:19:38.276 "zcopy": false, 00:19:38.276 "get_zone_info": false, 00:19:38.277 "zone_management": false, 00:19:38.277 "zone_append": false, 00:19:38.277 "compare": false, 00:19:38.277 "compare_and_write": false, 00:19:38.277 "abort": false, 00:19:38.277 "seek_hole": true, 00:19:38.277 "seek_data": true, 00:19:38.277 "copy": false, 00:19:38.277 "nvme_iov_md": false 00:19:38.277 }, 00:19:38.277 "driver_specific": { 00:19:38.277 "lvol": { 00:19:38.277 "lvol_store_uuid": "8793a231-f2e0-4266-b363-99d039d928e4", 00:19:38.277 "base_bdev": "nvme0n1", 00:19:38.277 "thin_provision": true, 00:19:38.277 "num_allocated_clusters": 0, 00:19:38.277 "snapshot": false, 00:19:38.277 "clone": false, 00:19:38.277 "esnap_clone": false 00:19:38.277 } 00:19:38.277 } 00:19:38.277 } 00:19:38.277 ]' 00:19:38.277 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:38.277 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:38.277 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:38.277 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:38.277 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:38.277 04:10:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:38.277 04:10:25 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:19:38.277 04:10:25 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:19:38.277 04:10:25 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ef57fba5-30ae-49ce-acdb-bf5a256a6e76 -c nvc0n1p0 --l2p_dram_limit 60 00:19:38.534 [2024-12-06 04:10:25.825709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.534 [2024-12-06 04:10:25.825762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:38.534 [2024-12-06 04:10:25.825776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:38.534 
[2024-12-06 04:10:25.825783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.534 [2024-12-06 04:10:25.825844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.534 [2024-12-06 04:10:25.825853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:38.534 [2024-12-06 04:10:25.825861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:38.534 [2024-12-06 04:10:25.825867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.534 [2024-12-06 04:10:25.825907] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:38.534 [2024-12-06 04:10:25.826524] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:38.534 [2024-12-06 04:10:25.826540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.534 [2024-12-06 04:10:25.826546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:38.534 [2024-12-06 04:10:25.826554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:19:38.535 [2024-12-06 04:10:25.826560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.535 [2024-12-06 04:10:25.826605] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d019498e-3fd5-416b-b42a-92259f584e3f 00:19:38.535 [2024-12-06 04:10:25.827630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.535 [2024-12-06 04:10:25.827659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:38.535 [2024-12-06 04:10:25.827669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:38.535 [2024-12-06 04:10:25.827677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.535 [2024-12-06 04:10:25.832398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.535 [2024-12-06 04:10:25.832426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:38.535 [2024-12-06 04:10:25.832434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.612 ms 00:19:38.535 [2024-12-06 04:10:25.832441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.535 [2024-12-06 04:10:25.832536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.535 [2024-12-06 04:10:25.832545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:38.535 [2024-12-06 04:10:25.832551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:38.535 [2024-12-06 04:10:25.832560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.535 [2024-12-06 04:10:25.832610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.535 [2024-12-06 04:10:25.832619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:38.535 [2024-12-06 04:10:25.832625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:38.535 [2024-12-06 04:10:25.832632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.535 [2024-12-06 04:10:25.832659] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:38.535 [2024-12-06 04:10:25.835522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.535 [2024-12-06 
04:10:25.835547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:38.535 [2024-12-06 04:10:25.835556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.864 ms 00:19:38.535 [2024-12-06 04:10:25.835564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.535 [2024-12-06 04:10:25.835608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.535 [2024-12-06 04:10:25.835614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:38.535 [2024-12-06 04:10:25.835622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:38.535 [2024-12-06 04:10:25.835627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.535 [2024-12-06 04:10:25.835666] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:38.535 [2024-12-06 04:10:25.835796] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:38.535 [2024-12-06 04:10:25.835809] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:38.535 [2024-12-06 04:10:25.835817] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:38.535 [2024-12-06 04:10:25.835828] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:38.535 [2024-12-06 04:10:25.835835] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:38.535 [2024-12-06 04:10:25.835842] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:38.535 [2024-12-06 04:10:25.835848] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:38.535 [2024-12-06 04:10:25.835855] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:38.535 [2024-12-06 04:10:25.835860] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:38.535 [2024-12-06 04:10:25.835867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.535 [2024-12-06 04:10:25.835874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:38.535 [2024-12-06 04:10:25.835881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:19:38.535 [2024-12-06 04:10:25.835887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.535 [2024-12-06 04:10:25.835964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.535 [2024-12-06 04:10:25.835970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:38.535 [2024-12-06 04:10:25.835977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:38.535 [2024-12-06 04:10:25.835983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.535 [2024-12-06 04:10:25.836087] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:38.535 [2024-12-06 04:10:25.836097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:38.535 [2024-12-06 04:10:25.836106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.535 [2024-12-06 04:10:25.836112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.535 [2024-12-06 04:10:25.836119] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:19:38.535 [2024-12-06 04:10:25.836124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:38.535 [2024-12-06 04:10:25.836132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:38.535 [2024-12-06 04:10:25.836137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:38.535 [2024-12-06 04:10:25.836144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:38.535 [2024-12-06 04:10:25.836149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.535 [2024-12-06 04:10:25.836156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:38.535 [2024-12-06 04:10:25.836160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:38.535 [2024-12-06 04:10:25.836167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.535 [2024-12-06 04:10:25.836172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:38.535 [2024-12-06 04:10:25.836179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:38.535 [2024-12-06 04:10:25.836184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.535 [2024-12-06 04:10:25.836192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:38.535 [2024-12-06 04:10:25.836198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:38.535 [2024-12-06 04:10:25.836204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.535 [2024-12-06 04:10:25.836209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:38.535 [2024-12-06 04:10:25.836215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:38.535 [2024-12-06 04:10:25.836220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.535 [2024-12-06 04:10:25.836227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:38.535 [2024-12-06 04:10:25.836232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:38.535 [2024-12-06 04:10:25.836238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.535 [2024-12-06 04:10:25.836243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:38.535 [2024-12-06 04:10:25.836249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:38.535 [2024-12-06 04:10:25.836254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.535 [2024-12-06 04:10:25.836260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:38.535 [2024-12-06 04:10:25.836265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:38.535 [2024-12-06 04:10:25.836272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.535 [2024-12-06 04:10:25.836276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:38.535 [2024-12-06 04:10:25.836285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:38.535 [2024-12-06 04:10:25.836300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.535 [2024-12-06 04:10:25.836306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:38.535 [2024-12-06 04:10:25.836312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:38.535 [2024-12-06 04:10:25.836321] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.535 [2024-12-06 04:10:25.836327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:38.535 [2024-12-06 04:10:25.836333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:38.535 [2024-12-06 04:10:25.836337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.535 [2024-12-06 04:10:25.836344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:38.535 [2024-12-06 04:10:25.836349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:38.535 [2024-12-06 04:10:25.836355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.535 [2024-12-06 04:10:25.836359] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:38.535 [2024-12-06 04:10:25.836366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:38.535 [2024-12-06 04:10:25.836372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.535 [2024-12-06 04:10:25.836378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.535 [2024-12-06 04:10:25.836384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:38.535 [2024-12-06 04:10:25.836394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:38.535 [2024-12-06 04:10:25.836399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:38.535 [2024-12-06 04:10:25.836405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:38.535 [2024-12-06 04:10:25.836410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:38.535 [2024-12-06 04:10:25.836417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:38.535 [2024-12-06 04:10:25.836423] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:38.535 [2024-12-06 04:10:25.836431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.535 [2024-12-06 04:10:25.836438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:38.535 [2024-12-06 04:10:25.836444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:38.535 [2024-12-06 04:10:25.836450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:38.536 [2024-12-06 04:10:25.836458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:38.536 [2024-12-06 04:10:25.836463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:38.536 [2024-12-06 04:10:25.836470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:38.536 [2024-12-06 04:10:25.836476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:38.536 [2024-12-06 04:10:25.836483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:19:38.536 [2024-12-06 04:10:25.836488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:38.536 [2024-12-06 04:10:25.836496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:38.536 [2024-12-06 04:10:25.836501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:38.536 [2024-12-06 04:10:25.836508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:38.536 [2024-12-06 04:10:25.836514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:38.536 [2024-12-06 04:10:25.836522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:38.536 [2024-12-06 04:10:25.836528] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:38.536 [2024-12-06 04:10:25.836535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.536 [2024-12-06 04:10:25.836543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:38.536 [2024-12-06 04:10:25.836550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:38.536 [2024-12-06 04:10:25.836555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:38.536 [2024-12-06 04:10:25.836562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:38.536 [2024-12-06 04:10:25.836568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.536 [2024-12-06 04:10:25.836575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:38.536 [2024-12-06 04:10:25.836580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:19:38.536 [2024-12-06 04:10:25.836587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.536 [2024-12-06 04:10:25.836668] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
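A consistency check worth knowing when reading these layout dumps: the region sizes follow directly from the parameters printed above. 20971520 L2P entries at the stated address size of 4 bytes is exactly the 80.00 MiB l2p region, and 2048 P2L checkpoint pages of 4 KiB each give the 8.00 MiB p2l0..p2l3 regions:

    echo $((20971520 * 4 / 1024 / 1024))   # -> 80  (MiB, matches "Region l2p")
    echo $((2048 * 4096 / 1024 / 1024))    # -> 8   (MiB, matches "Region p2l0".."p2l3")

Note the full 80 MiB mapping table exceeds the --l2p_dram_limit 60 passed to bdev_ftl_create, so only part of it can be resident at a time, which is why a later line reports "l2p maximum resident size is: 59 (of 60) MiB".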
00:19:38.536 [2024-12-06 04:10:25.836681] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:41.060 [2024-12-06 04:10:28.398209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.060 [2024-12-06 04:10:28.398465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:41.060 [2024-12-06 04:10:28.398627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2561.529 ms 00:19:41.060 [2024-12-06 04:10:28.398658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.060 [2024-12-06 04:10:28.432605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.060 [2024-12-06 04:10:28.432825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:41.060 [2024-12-06 04:10:28.432846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.449 ms 00:19:41.060 [2024-12-06 04:10:28.432856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.060 [2024-12-06 04:10:28.433028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.060 [2024-12-06 04:10:28.433041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:41.060 [2024-12-06 04:10:28.433050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:41.060 [2024-12-06 04:10:28.433061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.060 [2024-12-06 04:10:28.472905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.060 [2024-12-06 04:10:28.473068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:41.060 [2024-12-06 04:10:28.473135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.800 ms 00:19:41.060 [2024-12-06 04:10:28.473163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.060 [2024-12-06 04:10:28.473223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.060 [2024-12-06 04:10:28.473247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:41.060 [2024-12-06 04:10:28.473269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:41.060 [2024-12-06 04:10:28.473289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.060 [2024-12-06 04:10:28.473787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.060 [2024-12-06 04:10:28.473892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:41.060 [2024-12-06 04:10:28.473949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:19:41.060 [2024-12-06 04:10:28.473977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.060 [2024-12-06 04:10:28.474138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.060 [2024-12-06 04:10:28.474164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:41.060 [2024-12-06 04:10:28.474217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:19:41.060 [2024-12-06 04:10:28.474244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.060 [2024-12-06 04:10:28.488653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.060 [2024-12-06 04:10:28.488796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:41.060 [2024-12-06 
04:10:28.488850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.373 ms 00:19:41.060 [2024-12-06 04:10:28.488966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.060 [2024-12-06 04:10:28.500397] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:41.060 [2024-12-06 04:10:28.514905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.060 [2024-12-06 04:10:28.514937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:41.060 [2024-12-06 04:10:28.514954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.816 ms 00:19:41.061 [2024-12-06 04:10:28.514962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.061 [2024-12-06 04:10:28.564153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.061 [2024-12-06 04:10:28.564199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:41.061 [2024-12-06 04:10:28.564216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.157 ms 00:19:41.061 [2024-12-06 04:10:28.564224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.061 [2024-12-06 04:10:28.564406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.061 [2024-12-06 04:10:28.564421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:41.061 [2024-12-06 04:10:28.564434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:19:41.061 [2024-12-06 04:10:28.564442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.319 [2024-12-06 04:10:28.587404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.319 [2024-12-06 04:10:28.587452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:41.319 [2024-12-06 04:10:28.587465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.924 ms 00:19:41.319 [2024-12-06 04:10:28.587473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.319 [2024-12-06 04:10:28.609950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.319 [2024-12-06 04:10:28.610070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:41.319 [2024-12-06 04:10:28.610090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.435 ms 00:19:41.319 [2024-12-06 04:10:28.610097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.319 [2024-12-06 04:10:28.610682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.319 [2024-12-06 04:10:28.610700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:41.319 [2024-12-06 04:10:28.610710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:19:41.319 [2024-12-06 04:10:28.610736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.319 [2024-12-06 04:10:28.672638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.319 [2024-12-06 04:10:28.672677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:41.319 [2024-12-06 04:10:28.672692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.861 ms 00:19:41.319 [2024-12-06 04:10:28.672703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.319 [2024-12-06 
04:10:28.696911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.319 [2024-12-06 04:10:28.696946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:41.319 [2024-12-06 04:10:28.696960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.124 ms 00:19:41.319 [2024-12-06 04:10:28.696968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.319 [2024-12-06 04:10:28.720301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.319 [2024-12-06 04:10:28.720421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:41.319 [2024-12-06 04:10:28.720441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.302 ms 00:19:41.319 [2024-12-06 04:10:28.720449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.319 [2024-12-06 04:10:28.743363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.319 [2024-12-06 04:10:28.743467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:41.319 [2024-12-06 04:10:28.743522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.884 ms 00:19:41.319 [2024-12-06 04:10:28.743564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.319 [2024-12-06 04:10:28.743618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.319 [2024-12-06 04:10:28.743663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:41.319 [2024-12-06 04:10:28.743694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:41.319 [2024-12-06 04:10:28.743759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.319 [2024-12-06 04:10:28.743853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.319 [2024-12-06 04:10:28.743923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:41.319 [2024-12-06 04:10:28.743979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:41.319 [2024-12-06 04:10:28.744001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.319 [2024-12-06 04:10:28.745013] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2918.880 ms, result 0 00:19:41.319 { 00:19:41.319 "name": "ftl0", 00:19:41.319 "uuid": "d019498e-3fd5-416b-b42a-92259f584e3f" 00:19:41.319 } 00:19:41.319 04:10:28 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:19:41.319 04:10:28 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:41.319 04:10:28 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:41.319 04:10:28 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:19:41.319 04:10:28 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:41.319 04:10:28 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:41.319 04:10:28 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:41.577 04:10:28 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:41.835 [ 00:19:41.835 { 00:19:41.835 "name": "ftl0", 00:19:41.835 "aliases": [ 00:19:41.835 "d019498e-3fd5-416b-b42a-92259f584e3f" 00:19:41.835 ], 00:19:41.835 "product_name": "FTL 
disk", 00:19:41.835 "block_size": 4096, 00:19:41.835 "num_blocks": 20971520, 00:19:41.835 "uuid": "d019498e-3fd5-416b-b42a-92259f584e3f", 00:19:41.835 "assigned_rate_limits": { 00:19:41.835 "rw_ios_per_sec": 0, 00:19:41.835 "rw_mbytes_per_sec": 0, 00:19:41.835 "r_mbytes_per_sec": 0, 00:19:41.835 "w_mbytes_per_sec": 0 00:19:41.835 }, 00:19:41.835 "claimed": false, 00:19:41.835 "zoned": false, 00:19:41.835 "supported_io_types": { 00:19:41.835 "read": true, 00:19:41.835 "write": true, 00:19:41.835 "unmap": true, 00:19:41.835 "flush": true, 00:19:41.835 "reset": false, 00:19:41.835 "nvme_admin": false, 00:19:41.835 "nvme_io": false, 00:19:41.835 "nvme_io_md": false, 00:19:41.835 "write_zeroes": true, 00:19:41.835 "zcopy": false, 00:19:41.835 "get_zone_info": false, 00:19:41.835 "zone_management": false, 00:19:41.835 "zone_append": false, 00:19:41.835 "compare": false, 00:19:41.835 "compare_and_write": false, 00:19:41.835 "abort": false, 00:19:41.835 "seek_hole": false, 00:19:41.835 "seek_data": false, 00:19:41.835 "copy": false, 00:19:41.835 "nvme_iov_md": false 00:19:41.835 }, 00:19:41.835 "driver_specific": { 00:19:41.835 "ftl": { 00:19:41.835 "base_bdev": "ef57fba5-30ae-49ce-acdb-bf5a256a6e76", 00:19:41.835 "cache": "nvc0n1p0" 00:19:41.835 } 00:19:41.835 } 00:19:41.835 } 00:19:41.835 ] 00:19:41.835 04:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:19:41.835 04:10:29 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:19:41.835 04:10:29 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:42.092 04:10:29 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:19:42.092 04:10:29 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:42.092 [2024-12-06 04:10:29.557418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.092 [2024-12-06 04:10:29.557601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:42.092 [2024-12-06 04:10:29.557656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:42.092 [2024-12-06 04:10:29.557683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.092 [2024-12-06 04:10:29.557745] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:42.092 [2024-12-06 04:10:29.560382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.092 [2024-12-06 04:10:29.560487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:42.092 [2024-12-06 04:10:29.560558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.593 ms 00:19:42.092 [2024-12-06 04:10:29.560580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.092 [2024-12-06 04:10:29.560985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.092 [2024-12-06 04:10:29.561057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:42.092 [2024-12-06 04:10:29.561104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:19:42.092 [2024-12-06 04:10:29.561125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.092 [2024-12-06 04:10:29.564375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.092 [2024-12-06 04:10:29.564448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:42.092 
[2024-12-06 04:10:29.564496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.218 ms 00:19:42.092 [2024-12-06 04:10:29.564506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.092 [2024-12-06 04:10:29.570770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.092 [2024-12-06 04:10:29.570858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:42.092 [2024-12-06 04:10:29.570915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.237 ms 00:19:42.092 [2024-12-06 04:10:29.570937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.092 [2024-12-06 04:10:29.594021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.092 [2024-12-06 04:10:29.594130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:42.092 [2024-12-06 04:10:29.594197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.990 ms 00:19:42.092 [2024-12-06 04:10:29.594219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.092 [2024-12-06 04:10:29.608921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.092 [2024-12-06 04:10:29.609031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:42.092 [2024-12-06 04:10:29.609093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.621 ms 00:19:42.092 [2024-12-06 04:10:29.609116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.092 [2024-12-06 04:10:29.609300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.092 [2024-12-06 04:10:29.609327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:42.092 [2024-12-06 04:10:29.609349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:19:42.092 [2024-12-06 04:10:29.609402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.351 [2024-12-06 04:10:29.632079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.351 [2024-12-06 04:10:29.632183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:42.351 [2024-12-06 04:10:29.632235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.639 ms 00:19:42.351 [2024-12-06 04:10:29.632257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.351 [2024-12-06 04:10:29.654040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.351 [2024-12-06 04:10:29.654143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:42.351 [2024-12-06 04:10:29.654226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.735 ms 00:19:42.351 [2024-12-06 04:10:29.654248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.351 [2024-12-06 04:10:29.676212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.351 [2024-12-06 04:10:29.676315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:42.351 [2024-12-06 04:10:29.676368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.916 ms 00:19:42.351 [2024-12-06 04:10:29.676390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.351 [2024-12-06 04:10:29.698105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.351 [2024-12-06 04:10:29.698208] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:42.351 [2024-12-06 04:10:29.698273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.624 ms 00:19:42.351 [2024-12-06 04:10:29.698296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.351 [2024-12-06 04:10:29.698342] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:42.351 [2024-12-06 04:10:29.698472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.698518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.698547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.698577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.698606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.698637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.698707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.698805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.698859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.698919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.698971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 
[2024-12-06 04:10:29.699471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.699978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.700006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.700069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.700099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:42.351 [2024-12-06 04:10:29.700131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.700160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.700238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.700266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.700296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.700354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.700464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.700494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:19:42.352 [2024-12-06 04:10:29.700526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.700634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.700666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.700695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.700766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.700796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.700826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.700875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.700912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.700940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.700970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:42.352 [2024-12-06 04:10:29.701775] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:42.352 [2024-12-06 04:10:29.701785] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d019498e-3fd5-416b-b42a-92259f584e3f 00:19:42.352 [2024-12-06 04:10:29.701795] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:42.352 [2024-12-06 04:10:29.701805] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:42.352 [2024-12-06 04:10:29.701811] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:42.352 [2024-12-06 04:10:29.701823] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:42.352 [2024-12-06 04:10:29.701830] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:42.352 [2024-12-06 04:10:29.701839] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:42.352 [2024-12-06 04:10:29.701846] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:42.352 [2024-12-06 04:10:29.701853] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:42.352 [2024-12-06 04:10:29.701859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:42.352 [2024-12-06 04:10:29.701868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.352 [2024-12-06 04:10:29.701875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:42.352 [2024-12-06 04:10:29.701887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.527 ms 00:19:42.352 [2024-12-06 04:10:29.701895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.352 [2024-12-06 04:10:29.714207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.352 [2024-12-06 04:10:29.714240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:42.352 [2024-12-06 04:10:29.714252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.273 ms 00:19:42.352 [2024-12-06 04:10:29.714260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.352 [2024-12-06 04:10:29.714639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.353 [2024-12-06 04:10:29.714650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:42.353 [2024-12-06 04:10:29.714660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:19:42.353 [2024-12-06 04:10:29.714667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.353 [2024-12-06 04:10:29.758116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.353 [2024-12-06 04:10:29.758154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:42.353 [2024-12-06 04:10:29.758166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.353 [2024-12-06 04:10:29.758173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
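The statistics dump above prints WAF as total writes divided by user writes (assuming the conventional media-writes over host-writes definition of write amplification). The device is being torn down before any user I/O has gone through it, so only the 960 blocks of metadata writes are counted and the ratio degenerates to inf. The same arithmetic, with the two counters copied from the dump:

  total=960; user=0    # "total writes: 960", "user writes: 0" above
  (( user > 0 )) && echo "WAF=$(( total / user ))" || echo WAF=inf    # matches "WAF: inf"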
00:19:42.353 [2024-12-06 04:10:29.758232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.353 [2024-12-06 04:10:29.758241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:42.353 [2024-12-06 04:10:29.758250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.353 [2024-12-06 04:10:29.758258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.353 [2024-12-06 04:10:29.758350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.353 [2024-12-06 04:10:29.758363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:42.353 [2024-12-06 04:10:29.758372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.353 [2024-12-06 04:10:29.758380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.353 [2024-12-06 04:10:29.758408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.353 [2024-12-06 04:10:29.758415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:42.353 [2024-12-06 04:10:29.758424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.353 [2024-12-06 04:10:29.758431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.353 [2024-12-06 04:10:29.838533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.353 [2024-12-06 04:10:29.838574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:42.353 [2024-12-06 04:10:29.838586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.353 [2024-12-06 04:10:29.838594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.610 [2024-12-06 04:10:29.900393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.610 [2024-12-06 04:10:29.900535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:42.610 [2024-12-06 04:10:29.900555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.610 [2024-12-06 04:10:29.900563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.610 [2024-12-06 04:10:29.900639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.610 [2024-12-06 04:10:29.900648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:42.610 [2024-12-06 04:10:29.900660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.610 [2024-12-06 04:10:29.900668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.610 [2024-12-06 04:10:29.900761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.610 [2024-12-06 04:10:29.900771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:42.610 [2024-12-06 04:10:29.900781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.610 [2024-12-06 04:10:29.900788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.610 [2024-12-06 04:10:29.900893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.610 [2024-12-06 04:10:29.900906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:42.610 [2024-12-06 04:10:29.900916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.610 [2024-12-06 
04:10:29.900925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.610 [2024-12-06 04:10:29.900976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.610 [2024-12-06 04:10:29.900989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:42.610 [2024-12-06 04:10:29.900998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.610 [2024-12-06 04:10:29.901005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.610 [2024-12-06 04:10:29.901044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.610 [2024-12-06 04:10:29.901052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:42.610 [2024-12-06 04:10:29.901061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.610 [2024-12-06 04:10:29.901071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.610 [2024-12-06 04:10:29.901119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.610 [2024-12-06 04:10:29.901129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:42.610 [2024-12-06 04:10:29.901138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.610 [2024-12-06 04:10:29.901145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.610 [2024-12-06 04:10:29.901288] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.845 ms, result 0 00:19:42.610 true 00:19:42.610 04:10:29 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75102 00:19:42.610 04:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75102 ']' 00:19:42.610 04:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75102 00:19:42.610 04:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:19:42.610 04:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.610 04:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75102 00:19:42.610 04:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:42.610 04:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:42.610 04:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75102' 00:19:42.610 killing process with pid 75102 00:19:42.610 04:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75102 00:19:42.610 04:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75102 00:19:54.827 04:10:40 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:19:54.827 04:10:40 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:54.827 04:10:40 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:19:54.827 04:10:40 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:54.827 04:10:40 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:54.828 04:10:40 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:54.828 04:10:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:54.828 04:10:40 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:54.828 04:10:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:54.828 04:10:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:54.828 04:10:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:54.828 04:10:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:54.828 04:10:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:54.828 04:10:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:54.828 04:10:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:54.828 04:10:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:54.828 04:10:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:54.828 04:10:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:54.828 04:10:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:54.828 04:10:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:54.828 04:10:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:54.828 04:10:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:54.828 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:19:54.828 fio-3.35 00:19:54.828 Starting 1 thread 00:19:57.366 00:19:57.366 test: (groupid=0, jobs=1): err= 0: pid=75281: Fri Dec 6 04:10:44 2024 00:19:57.366 read: IOPS=1270, BW=84.4MiB/s (88.5MB/s)(255MiB/3016msec) 00:19:57.366 slat (nsec): min=3048, max=25189, avg=4556.47, stdev=2135.01 00:19:57.366 clat (usec): min=233, max=1129, avg=354.77, stdev=81.71 00:19:57.366 lat (usec): min=238, max=1135, avg=359.33, stdev=82.26 00:19:57.366 clat percentiles (usec): 00:19:57.366 | 1.00th=[ 255], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 314], 00:19:57.366 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 326], 60.00th=[ 330], 00:19:57.366 | 70.00th=[ 343], 80.00th=[ 388], 90.00th=[ 457], 95.00th=[ 506], 00:19:57.366 | 99.00th=[ 717], 99.50th=[ 766], 99.90th=[ 1057], 99.95th=[ 1123], 00:19:57.366 | 99.99th=[ 1123] 00:19:57.366 write: IOPS=1279, BW=85.0MiB/s (89.1MB/s)(256MiB/3013msec); 0 zone resets 00:19:57.366 slat (nsec): min=13944, max=80116, avg=19464.21, stdev=4100.74 00:19:57.366 clat (usec): min=270, max=1481, avg=392.73, stdev=104.09 00:19:57.366 lat (usec): min=290, max=1501, avg=412.19, stdev=104.55 00:19:57.366 clat percentiles (usec): 00:19:57.366 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 343], 00:19:57.366 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 363], 00:19:57.366 | 70.00th=[ 383], 80.00th=[ 420], 90.00th=[ 498], 95.00th=[ 594], 00:19:57.366 | 99.00th=[ 824], 99.50th=[ 947], 99.90th=[ 1287], 99.95th=[ 1352], 00:19:57.366 | 99.99th=[ 1483] 00:19:57.366 bw ( KiB/s): min=73032, max=93840, per=100.00%, avg=87062.67, stdev=7251.45, samples=6 00:19:57.366 iops : min= 1074, max= 1380, avg=1280.33, stdev=106.64, samples=6 00:19:57.366 lat (usec) : 250=0.09%, 500=92.37%, 750=6.33%, 
1000=0.92% 00:19:57.366 lat (msec) : 2=0.29% 00:19:57.366 cpu : usr=99.24%, sys=0.10%, ctx=37, majf=0, minf=1169 00:19:57.366 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:57.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.366 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:57.366 00:19:57.366 Run status group 0 (all jobs): 00:19:57.366 READ: bw=84.4MiB/s (88.5MB/s), 84.4MiB/s-84.4MiB/s (88.5MB/s-88.5MB/s), io=255MiB (267MB), run=3016-3016msec 00:19:57.366 WRITE: bw=85.0MiB/s (89.1MB/s), 85.0MiB/s-85.0MiB/s (89.1MB/s-89.1MB/s), io=256MiB (269MB), run=3013-3013msec 00:19:58.309 ----------------------------------------------------- 00:19:58.309 Suppressions used: 00:19:58.309 count bytes template 00:19:58.309 1 5 /usr/src/fio/parse.c 00:19:58.309 1 8 libtcmalloc_minimal.so 00:19:58.309 1 904 libcrypto.so 00:19:58.309 ----------------------------------------------------- 00:19:58.309 00:19:58.309 04:10:45 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:19:58.309 04:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:58.309 04:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:58.309 04:10:45 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:58.309 04:10:45 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:19:58.309 04:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:58.309 04:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:58.309 04:10:45 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:58.309 04:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:58.309 04:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:58.309 04:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:58.309 04:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:58.309 04:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:58.309 04:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:58.309 04:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:58.309 04:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:58.309 04:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:58.309 04:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:58.309 04:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:58.569 04:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:58.569 04:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:58.569 04:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:58.569 04:10:45 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:58.569 04:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:58.569 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:58.569 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:58.569 fio-3.35 00:19:58.569 Starting 2 threads 00:20:25.188 00:20:25.188 first_half: (groupid=0, jobs=1): err= 0: pid=75373: Fri Dec 6 04:11:08 2024 00:20:25.188 read: IOPS=3037, BW=11.9MiB/s (12.4MB/s)(255MiB/21480msec) 00:20:25.188 slat (nsec): min=3095, max=48629, avg=3858.57, stdev=834.33 00:20:25.188 clat (usec): min=589, max=281947, avg=32437.36, stdev=16255.15 00:20:25.188 lat (usec): min=593, max=281952, avg=32441.22, stdev=16255.19 00:20:25.188 clat percentiles (msec): 00:20:25.188 | 1.00th=[ 7], 5.00th=[ 25], 10.00th=[ 28], 20.00th=[ 30], 00:20:25.188 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:20:25.188 | 70.00th=[ 31], 80.00th=[ 33], 90.00th=[ 37], 95.00th=[ 40], 00:20:25.188 | 99.00th=[ 123], 99.50th=[ 138], 99.90th=[ 184], 99.95th=[ 243], 00:20:25.188 | 99.99th=[ 275] 00:20:25.188 write: IOPS=3620, BW=14.1MiB/s (14.8MB/s)(256MiB/18101msec); 0 zone resets 00:20:25.188 slat (usec): min=3, max=718, avg= 5.61, stdev= 4.02 00:20:25.188 clat (usec): min=326, max=75701, avg=9566.10, stdev=14707.59 00:20:25.188 lat (usec): min=336, max=75706, avg=9571.71, stdev=14707.64 00:20:25.188 clat percentiles (usec): 00:20:25.188 | 1.00th=[ 586], 5.00th=[ 734], 10.00th=[ 832], 20.00th=[ 1090], 00:20:25.188 | 30.00th=[ 2769], 40.00th=[ 4047], 50.00th=[ 4883], 60.00th=[ 5473], 00:20:25.188 | 70.00th=[ 6128], 80.00th=[10683], 90.00th=[28443], 95.00th=[52691], 00:20:25.188 | 99.00th=[63177], 99.50th=[67634], 99.90th=[72877], 99.95th=[73925], 00:20:25.188 | 99.99th=[74974] 00:20:25.188 bw ( KiB/s): min= 960, max=42816, per=82.28%, avg=23831.27, stdev=13056.83, samples=22 00:20:25.188 iops : min= 240, max=10704, avg=5957.82, stdev=3264.21, samples=22 00:20:25.188 lat (usec) : 500=0.03%, 750=2.88%, 1000=6.05% 00:20:25.188 lat (msec) : 2=4.35%, 4=6.89%, 10=20.70%, 20=5.69%, 50=47.98% 00:20:25.188 lat (msec) : 100=4.56%, 250=0.85%, 500=0.02% 00:20:25.188 cpu : usr=99.34%, sys=0.07%, ctx=63, majf=0, minf=5599 00:20:25.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:25.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.188 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:25.188 issued rwts: total=65246,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.188 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:25.188 second_half: (groupid=0, jobs=1): err= 0: pid=75374: Fri Dec 6 04:11:08 2024 00:20:25.188 read: IOPS=3052, BW=11.9MiB/s (12.5MB/s)(255MiB/21352msec) 00:20:25.188 slat (nsec): min=3090, max=37860, avg=3948.76, stdev=919.99 00:20:25.188 clat (usec): min=512, max=287295, avg=32968.81, stdev=14648.17 00:20:25.188 lat (usec): min=517, max=287299, avg=32972.76, stdev=14648.20 00:20:25.188 clat percentiles (msec): 00:20:25.188 | 1.00th=[ 5], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 30], 00:20:25.188 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:20:25.188 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 37], 
95.00th=[ 44], 00:20:25.188 | 99.00th=[ 108], 99.50th=[ 128], 99.90th=[ 163], 99.95th=[ 188], 00:20:25.188 | 99.99th=[ 279] 00:20:25.188 write: IOPS=4007, BW=15.7MiB/s (16.4MB/s)(256MiB/16353msec); 0 zone resets 00:20:25.188 slat (usec): min=3, max=313, avg= 5.70, stdev= 2.73 00:20:25.188 clat (usec): min=346, max=76092, avg=8895.87, stdev=14632.46 00:20:25.188 lat (usec): min=352, max=76096, avg=8901.56, stdev=14632.47 00:20:25.188 clat percentiles (usec): 00:20:25.188 | 1.00th=[ 594], 5.00th=[ 742], 10.00th=[ 840], 20.00th=[ 979], 00:20:25.188 | 30.00th=[ 1287], 40.00th=[ 2900], 50.00th=[ 3884], 60.00th=[ 4752], 00:20:25.188 | 70.00th=[ 5932], 80.00th=[10552], 90.00th=[22414], 95.00th=[52691], 00:20:25.188 | 99.00th=[61604], 99.50th=[67634], 99.90th=[72877], 99.95th=[73925], 00:20:25.188 | 99.99th=[74974] 00:20:25.188 bw ( KiB/s): min= 168, max=50000, per=86.19%, avg=24966.10, stdev=15745.81, samples=21 00:20:25.188 iops : min= 42, max=12500, avg=6241.52, stdev=3936.45, samples=21 00:20:25.188 lat (usec) : 500=0.06%, 750=2.58%, 1000=8.00% 00:20:25.188 lat (msec) : 2=6.62%, 4=8.85%, 10=13.83%, 20=6.23%, 50=48.10% 00:20:25.188 lat (msec) : 100=5.09%, 250=0.63%, 500=0.01% 00:20:25.188 cpu : usr=99.44%, sys=0.10%, ctx=34, majf=0, minf=5530 00:20:25.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:25.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.188 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:25.188 issued rwts: total=65182,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.188 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:25.188 00:20:25.188 Run status group 0 (all jobs): 00:20:25.188 READ: bw=23.7MiB/s (24.9MB/s), 11.9MiB/s-11.9MiB/s (12.4MB/s-12.5MB/s), io=509MiB (534MB), run=21352-21480msec 00:20:25.188 WRITE: bw=28.3MiB/s (29.7MB/s), 14.1MiB/s-15.7MiB/s (14.8MB/s-16.4MB/s), io=512MiB (537MB), run=16353-18101msec 00:20:25.188 ----------------------------------------------------- 00:20:25.188 Suppressions used: 00:20:25.188 count bytes template 00:20:25.188 2 10 /usr/src/fio/parse.c 00:20:25.188 2 192 /usr/src/fio/iolog.c 00:20:25.188 1 8 libtcmalloc_minimal.so 00:20:25.188 1 904 libcrypto.so 00:20:25.188 ----------------------------------------------------- 00:20:25.188 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
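The fio_plugin trace that follows repeats the same pattern before every job: ldd the SPDK bdev ioengine, pick the ASan runtime out of its dependencies, and preload that runtime ahead of the plugin so the sanitized engine is loaded before fio itself. A minimal by-hand equivalent, using the paths this log shows (a sketch, not the harness):

  PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  JOB=/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
  ASAN=$(ldd "$PLUGIN" | grep libasan | awk '{print $3}')   # resolves to /usr/lib64/libasan.so.8 here
  LD_PRELOAD="$ASAN $PLUGIN" /usr/src/fio/fio "$JOB"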
00:20:25.188 04:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:25.188 04:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:25.188 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:25.188 fio-3.35 00:20:25.188 Starting 1 thread 00:20:37.373 00:20:37.373 test: (groupid=0, jobs=1): err= 0: pid=75659: Fri Dec 6 04:11:24 2024 00:20:37.373 read: IOPS=8152, BW=31.8MiB/s (33.4MB/s)(255MiB/7998msec) 00:20:37.373 slat (nsec): min=3097, max=41972, avg=3691.46, stdev=726.86 00:20:37.373 clat (usec): min=530, max=31467, avg=15693.67, stdev=1722.30 00:20:37.373 lat (usec): min=534, max=31470, avg=15697.36, stdev=1722.33 00:20:37.373 clat percentiles (usec): 00:20:37.373 | 1.00th=[13698], 5.00th=[13960], 10.00th=[14222], 20.00th=[14877], 00:20:37.373 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15401], 60.00th=[15533], 00:20:37.373 | 70.00th=[15795], 80.00th=[16057], 90.00th=[17695], 95.00th=[19530], 00:20:37.373 | 99.00th=[22676], 99.50th=[23462], 99.90th=[26608], 99.95th=[27657], 00:20:37.373 | 99.99th=[30802] 00:20:37.373 write: IOPS=16.0k, BW=62.5MiB/s (65.6MB/s)(256MiB/4093msec); 0 zone resets 00:20:37.373 slat (usec): min=4, max=450, avg= 6.50, stdev= 3.09 00:20:37.373 clat (usec): min=506, max=54002, avg=7949.32, stdev=10202.64 00:20:37.373 lat (usec): min=512, max=54007, avg=7955.82, stdev=10202.59 00:20:37.373 clat percentiles (usec): 00:20:37.373 | 1.00th=[ 668], 5.00th=[ 775], 10.00th=[ 873], 20.00th=[ 1004], 00:20:37.373 | 30.00th=[ 1123], 40.00th=[ 1516], 50.00th=[ 4883], 60.00th=[ 5669], 00:20:37.373 | 70.00th=[ 6915], 80.00th=[ 8455], 90.00th=[29492], 95.00th=[31065], 00:20:37.373 | 99.00th=[34866], 99.50th=[35914], 99.90th=[39584], 99.95th=[45876], 00:20:37.373 | 99.99th=[52691] 00:20:37.373 bw ( KiB/s): min= 8392, max=85728, per=90.96%, avg=58254.22, stdev=22431.26, samples=9 00:20:37.373 iops : min= 2098, max=21432, avg=14563.56, stdev=5607.82, samples=9 00:20:37.373 lat (usec) : 750=2.09%, 1000=7.94% 00:20:37.373 lat (msec) : 2=10.60%, 4=0.91%, 10=20.23%, 20=48.26%, 50=9.98% 00:20:37.373 lat (msec) : 100=0.01% 00:20:37.373 cpu : usr=99.07%, sys=0.22%, ctx=17, 
majf=0, minf=5565 00:20:37.373 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:37.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.373 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:37.373 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.373 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:37.373 00:20:37.373 Run status group 0 (all jobs): 00:20:37.373 READ: bw=31.8MiB/s (33.4MB/s), 31.8MiB/s-31.8MiB/s (33.4MB/s-33.4MB/s), io=255MiB (267MB), run=7998-7998msec 00:20:37.373 WRITE: bw=62.5MiB/s (65.6MB/s), 62.5MiB/s-62.5MiB/s (65.6MB/s-65.6MB/s), io=256MiB (268MB), run=4093-4093msec 00:20:38.305 ----------------------------------------------------- 00:20:38.305 Suppressions used: 00:20:38.305 count bytes template 00:20:38.305 1 5 /usr/src/fio/parse.c 00:20:38.305 2 192 /usr/src/fio/iolog.c 00:20:38.305 1 8 libtcmalloc_minimal.so 00:20:38.305 1 904 libcrypto.so 00:20:38.305 ----------------------------------------------------- 00:20:38.305 00:20:38.305 04:11:25 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:20:38.305 04:11:25 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.305 04:11:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:38.305 04:11:25 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:38.305 Remove shared memory files 00:20:38.305 04:11:25 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:20:38.305 04:11:25 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:38.305 04:11:25 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:20:38.305 04:11:25 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:20:38.305 04:11:25 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57148 /dev/shm/spdk_tgt_trace.pid74020 00:20:38.305 04:11:25 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:38.305 04:11:25 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:20:38.305 ************************************ 00:20:38.305 END TEST ftl_fio_basic 00:20:38.305 ************************************ 00:20:38.305 00:20:38.305 real 1m3.432s 00:20:38.305 user 2m22.240s 00:20:38.305 sys 0m2.542s 00:20:38.305 04:11:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:38.305 04:11:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:38.562 04:11:25 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:38.562 04:11:25 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:38.562 04:11:25 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:38.562 04:11:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:38.562 ************************************ 00:20:38.562 START TEST ftl_bdevperf 00:20:38.562 ************************************ 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:38.562 * Looking for test storage... 
00:20:38.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:38.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.562 --rc genhtml_branch_coverage=1 00:20:38.562 --rc genhtml_function_coverage=1 00:20:38.562 --rc genhtml_legend=1 00:20:38.562 --rc geninfo_all_blocks=1 00:20:38.562 --rc geninfo_unexecuted_blocks=1 00:20:38.562 00:20:38.562 ' 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:38.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.562 --rc genhtml_branch_coverage=1 00:20:38.562 
--rc genhtml_function_coverage=1 00:20:38.562 --rc genhtml_legend=1 00:20:38.562 --rc geninfo_all_blocks=1 00:20:38.562 --rc geninfo_unexecuted_blocks=1 00:20:38.562 00:20:38.562 ' 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:38.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.562 --rc genhtml_branch_coverage=1 00:20:38.562 --rc genhtml_function_coverage=1 00:20:38.562 --rc genhtml_legend=1 00:20:38.562 --rc geninfo_all_blocks=1 00:20:38.562 --rc geninfo_unexecuted_blocks=1 00:20:38.562 00:20:38.562 ' 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:38.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.562 --rc genhtml_branch_coverage=1 00:20:38.562 --rc genhtml_function_coverage=1 00:20:38.562 --rc genhtml_legend=1 00:20:38.562 --rc geninfo_all_blocks=1 00:20:38.562 --rc geninfo_unexecuted_blocks=1 00:20:38.562 00:20:38.562 ' 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:38.562 04:11:25 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:20:38.562 04:11:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:20:38.563 04:11:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:38.563 04:11:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:20:38.563 04:11:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75886 00:20:38.563 04:11:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:20:38.563 04:11:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:20:38.563 04:11:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75886 00:20:38.563 04:11:26 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 75886 ']' 00:20:38.563 04:11:26 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.563 04:11:26 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.563 04:11:26 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.563 04:11:26 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.563 04:11:26 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:38.563 [2024-12-06 04:11:26.073544] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
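For reference, the bdevperf launch traced here amounts to starting the app in wait mode and polling its RPC socket until it is ready. The binary path and flags below are taken from the log; the polling loop is a simplified stand-in for the waitforlisten helper in autotest_common.sh:

  # Start bdevperf suspended (-z) with ftl0 as the target bdev (as logged).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
  bdevperf_pid=$!
  # Poll the default RPC socket until it answers. rpc_get_methods is a
  # standard SPDK RPC; this loop is only an illustrative sketch.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done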
00:20:38.563 [2024-12-06 04:11:26.073845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75886 ] 00:20:38.820 [2024-12-06 04:11:26.238845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.820 [2024-12-06 04:11:26.339597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.386 04:11:26 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.386 04:11:26 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:20:39.644 04:11:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:39.644 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:20:39.644 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:39.644 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:20:39.644 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:20:39.644 04:11:26 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:39.902 04:11:27 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:39.902 04:11:27 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:20:39.902 04:11:27 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:39.902 04:11:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:39.902 04:11:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:39.902 04:11:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:20:39.902 04:11:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:39.902 04:11:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:39.902 04:11:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:39.902 { 00:20:39.902 "name": "nvme0n1", 00:20:39.902 "aliases": [ 00:20:39.902 "98001858-b4f8-460e-a8f1-50aac723b658" 00:20:39.902 ], 00:20:39.902 "product_name": "NVMe disk", 00:20:39.902 "block_size": 4096, 00:20:39.902 "num_blocks": 1310720, 00:20:39.902 "uuid": "98001858-b4f8-460e-a8f1-50aac723b658", 00:20:39.902 "numa_id": -1, 00:20:39.902 "assigned_rate_limits": { 00:20:39.902 "rw_ios_per_sec": 0, 00:20:39.902 "rw_mbytes_per_sec": 0, 00:20:39.902 "r_mbytes_per_sec": 0, 00:20:39.902 "w_mbytes_per_sec": 0 00:20:39.902 }, 00:20:39.902 "claimed": true, 00:20:39.902 "claim_type": "read_many_write_one", 00:20:39.902 "zoned": false, 00:20:39.902 "supported_io_types": { 00:20:39.902 "read": true, 00:20:39.902 "write": true, 00:20:39.902 "unmap": true, 00:20:39.902 "flush": true, 00:20:39.902 "reset": true, 00:20:39.902 "nvme_admin": true, 00:20:39.902 "nvme_io": true, 00:20:39.902 "nvme_io_md": false, 00:20:39.902 "write_zeroes": true, 00:20:39.902 "zcopy": false, 00:20:39.902 "get_zone_info": false, 00:20:39.902 "zone_management": false, 00:20:39.902 "zone_append": false, 00:20:39.902 "compare": true, 00:20:39.902 "compare_and_write": false, 00:20:39.902 "abort": true, 00:20:39.902 "seek_hole": false, 00:20:39.902 "seek_data": false, 00:20:39.902 "copy": true, 00:20:39.902 "nvme_iov_md": false 00:20:39.902 }, 00:20:39.902 "driver_specific": { 00:20:39.902 
"nvme": [ 00:20:39.902 { 00:20:39.902 "pci_address": "0000:00:11.0", 00:20:39.902 "trid": { 00:20:39.902 "trtype": "PCIe", 00:20:39.902 "traddr": "0000:00:11.0" 00:20:39.902 }, 00:20:39.902 "ctrlr_data": { 00:20:39.902 "cntlid": 0, 00:20:39.902 "vendor_id": "0x1b36", 00:20:39.902 "model_number": "QEMU NVMe Ctrl", 00:20:39.902 "serial_number": "12341", 00:20:39.902 "firmware_revision": "8.0.0", 00:20:39.902 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:39.902 "oacs": { 00:20:39.902 "security": 0, 00:20:39.902 "format": 1, 00:20:39.902 "firmware": 0, 00:20:39.902 "ns_manage": 1 00:20:39.902 }, 00:20:39.902 "multi_ctrlr": false, 00:20:39.902 "ana_reporting": false 00:20:39.902 }, 00:20:39.902 "vs": { 00:20:39.902 "nvme_version": "1.4" 00:20:39.902 }, 00:20:39.902 "ns_data": { 00:20:39.902 "id": 1, 00:20:39.902 "can_share": false 00:20:39.902 } 00:20:39.902 } 00:20:39.902 ], 00:20:39.902 "mp_policy": "active_passive" 00:20:39.902 } 00:20:39.902 } 00:20:39.902 ]' 00:20:39.902 04:11:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:40.160 04:11:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:40.160 04:11:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:40.160 04:11:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:40.160 04:11:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:40.160 04:11:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:20:40.160 04:11:27 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:20:40.160 04:11:27 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:40.160 04:11:27 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:20:40.160 04:11:27 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:40.160 04:11:27 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:40.418 04:11:27 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=8793a231-f2e0-4266-b363-99d039d928e4 00:20:40.418 04:11:27 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:20:40.418 04:11:27 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8793a231-f2e0-4266-b363-99d039d928e4 00:20:40.759 04:11:28 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:41.041 04:11:28 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=90fad635-52d3-4dbb-b3d3-b102793492c8 00:20:41.041 04:11:28 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 90fad635-52d3-4dbb-b3d3-b102793492c8 00:20:41.041 04:11:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=ad267bc8-7895-490c-a341-4a1f0bd95033 00:20:41.041 04:11:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ad267bc8-7895-490c-a341-4a1f0bd95033 00:20:41.041 04:11:28 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:20:41.041 04:11:28 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:41.041 04:11:28 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=ad267bc8-7895-490c-a341-4a1f0bd95033 00:20:41.041 04:11:28 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:20:41.041 04:11:28 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size ad267bc8-7895-490c-a341-4a1f0bd95033 00:20:41.041 04:11:28 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=ad267bc8-7895-490c-a341-4a1f0bd95033 00:20:41.041 04:11:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:41.041 04:11:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:20:41.041 04:11:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:41.041 04:11:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ad267bc8-7895-490c-a341-4a1f0bd95033 00:20:41.300 04:11:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:41.300 { 00:20:41.300 "name": "ad267bc8-7895-490c-a341-4a1f0bd95033", 00:20:41.300 "aliases": [ 00:20:41.300 "lvs/nvme0n1p0" 00:20:41.300 ], 00:20:41.300 "product_name": "Logical Volume", 00:20:41.300 "block_size": 4096, 00:20:41.300 "num_blocks": 26476544, 00:20:41.300 "uuid": "ad267bc8-7895-490c-a341-4a1f0bd95033", 00:20:41.300 "assigned_rate_limits": { 00:20:41.300 "rw_ios_per_sec": 0, 00:20:41.300 "rw_mbytes_per_sec": 0, 00:20:41.300 "r_mbytes_per_sec": 0, 00:20:41.300 "w_mbytes_per_sec": 0 00:20:41.300 }, 00:20:41.300 "claimed": false, 00:20:41.300 "zoned": false, 00:20:41.300 "supported_io_types": { 00:20:41.300 "read": true, 00:20:41.300 "write": true, 00:20:41.300 "unmap": true, 00:20:41.300 "flush": false, 00:20:41.300 "reset": true, 00:20:41.300 "nvme_admin": false, 00:20:41.300 "nvme_io": false, 00:20:41.300 "nvme_io_md": false, 00:20:41.300 "write_zeroes": true, 00:20:41.300 "zcopy": false, 00:20:41.300 "get_zone_info": false, 00:20:41.300 "zone_management": false, 00:20:41.300 "zone_append": false, 00:20:41.300 "compare": false, 00:20:41.300 "compare_and_write": false, 00:20:41.300 "abort": false, 00:20:41.300 "seek_hole": true, 00:20:41.300 "seek_data": true, 00:20:41.300 "copy": false, 00:20:41.300 "nvme_iov_md": false 00:20:41.300 }, 00:20:41.300 "driver_specific": { 00:20:41.300 "lvol": { 00:20:41.300 "lvol_store_uuid": "90fad635-52d3-4dbb-b3d3-b102793492c8", 00:20:41.300 "base_bdev": "nvme0n1", 00:20:41.300 "thin_provision": true, 00:20:41.300 "num_allocated_clusters": 0, 00:20:41.300 "snapshot": false, 00:20:41.300 "clone": false, 00:20:41.300 "esnap_clone": false 00:20:41.300 } 00:20:41.300 } 00:20:41.300 } 00:20:41.300 ]' 00:20:41.300 04:11:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:41.300 04:11:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:41.300 04:11:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:41.300 04:11:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:41.300 04:11:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:41.300 04:11:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:20:41.300 04:11:28 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:20:41.300 04:11:28 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:20:41.300 04:11:28 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:41.558 04:11:29 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:41.558 04:11:29 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:41.558 04:11:29 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size ad267bc8-7895-490c-a341-4a1f0bd95033 00:20:41.558 04:11:29 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=ad267bc8-7895-490c-a341-4a1f0bd95033 00:20:41.558 04:11:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:41.558 04:11:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:20:41.558 04:11:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:41.558 04:11:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ad267bc8-7895-490c-a341-4a1f0bd95033 00:20:41.817 04:11:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:41.817 { 00:20:41.817 "name": "ad267bc8-7895-490c-a341-4a1f0bd95033", 00:20:41.817 "aliases": [ 00:20:41.817 "lvs/nvme0n1p0" 00:20:41.817 ], 00:20:41.817 "product_name": "Logical Volume", 00:20:41.817 "block_size": 4096, 00:20:41.817 "num_blocks": 26476544, 00:20:41.817 "uuid": "ad267bc8-7895-490c-a341-4a1f0bd95033", 00:20:41.817 "assigned_rate_limits": { 00:20:41.817 "rw_ios_per_sec": 0, 00:20:41.817 "rw_mbytes_per_sec": 0, 00:20:41.817 "r_mbytes_per_sec": 0, 00:20:41.817 "w_mbytes_per_sec": 0 00:20:41.817 }, 00:20:41.817 "claimed": false, 00:20:41.817 "zoned": false, 00:20:41.817 "supported_io_types": { 00:20:41.817 "read": true, 00:20:41.817 "write": true, 00:20:41.817 "unmap": true, 00:20:41.817 "flush": false, 00:20:41.817 "reset": true, 00:20:41.817 "nvme_admin": false, 00:20:41.817 "nvme_io": false, 00:20:41.817 "nvme_io_md": false, 00:20:41.817 "write_zeroes": true, 00:20:41.817 "zcopy": false, 00:20:41.817 "get_zone_info": false, 00:20:41.817 "zone_management": false, 00:20:41.817 "zone_append": false, 00:20:41.817 "compare": false, 00:20:41.817 "compare_and_write": false, 00:20:41.817 "abort": false, 00:20:41.817 "seek_hole": true, 00:20:41.817 "seek_data": true, 00:20:41.817 "copy": false, 00:20:41.817 "nvme_iov_md": false 00:20:41.817 }, 00:20:41.817 "driver_specific": { 00:20:41.817 "lvol": { 00:20:41.817 "lvol_store_uuid": "90fad635-52d3-4dbb-b3d3-b102793492c8", 00:20:41.817 "base_bdev": "nvme0n1", 00:20:41.817 "thin_provision": true, 00:20:41.817 "num_allocated_clusters": 0, 00:20:41.817 "snapshot": false, 00:20:41.817 "clone": false, 00:20:41.817 "esnap_clone": false 00:20:41.817 } 00:20:41.817 } 00:20:41.817 } 00:20:41.817 ]' 00:20:41.817 04:11:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:41.817 04:11:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:41.817 04:11:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:41.817 04:11:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:41.817 04:11:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:41.818 04:11:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:20:41.818 04:11:29 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:20:41.818 04:11:29 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:42.075 04:11:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:20:42.075 04:11:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size ad267bc8-7895-490c-a341-4a1f0bd95033 00:20:42.075 04:11:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=ad267bc8-7895-490c-a341-4a1f0bd95033 00:20:42.075 04:11:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:42.075 04:11:29 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:20:42.075 04:11:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:42.075 04:11:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ad267bc8-7895-490c-a341-4a1f0bd95033 00:20:42.334 04:11:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:42.334 { 00:20:42.334 "name": "ad267bc8-7895-490c-a341-4a1f0bd95033", 00:20:42.334 "aliases": [ 00:20:42.334 "lvs/nvme0n1p0" 00:20:42.334 ], 00:20:42.334 "product_name": "Logical Volume", 00:20:42.334 "block_size": 4096, 00:20:42.334 "num_blocks": 26476544, 00:20:42.334 "uuid": "ad267bc8-7895-490c-a341-4a1f0bd95033", 00:20:42.334 "assigned_rate_limits": { 00:20:42.334 "rw_ios_per_sec": 0, 00:20:42.334 "rw_mbytes_per_sec": 0, 00:20:42.334 "r_mbytes_per_sec": 0, 00:20:42.334 "w_mbytes_per_sec": 0 00:20:42.334 }, 00:20:42.334 "claimed": false, 00:20:42.334 "zoned": false, 00:20:42.334 "supported_io_types": { 00:20:42.334 "read": true, 00:20:42.334 "write": true, 00:20:42.334 "unmap": true, 00:20:42.334 "flush": false, 00:20:42.334 "reset": true, 00:20:42.334 "nvme_admin": false, 00:20:42.334 "nvme_io": false, 00:20:42.334 "nvme_io_md": false, 00:20:42.334 "write_zeroes": true, 00:20:42.334 "zcopy": false, 00:20:42.334 "get_zone_info": false, 00:20:42.334 "zone_management": false, 00:20:42.334 "zone_append": false, 00:20:42.334 "compare": false, 00:20:42.334 "compare_and_write": false, 00:20:42.334 "abort": false, 00:20:42.334 "seek_hole": true, 00:20:42.334 "seek_data": true, 00:20:42.334 "copy": false, 00:20:42.334 "nvme_iov_md": false 00:20:42.334 }, 00:20:42.334 "driver_specific": { 00:20:42.334 "lvol": { 00:20:42.334 "lvol_store_uuid": "90fad635-52d3-4dbb-b3d3-b102793492c8", 00:20:42.334 "base_bdev": "nvme0n1", 00:20:42.334 "thin_provision": true, 00:20:42.334 "num_allocated_clusters": 0, 00:20:42.334 "snapshot": false, 00:20:42.334 "clone": false, 00:20:42.334 "esnap_clone": false 00:20:42.334 } 00:20:42.334 } 00:20:42.334 } 00:20:42.334 ]' 00:20:42.334 04:11:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:42.334 04:11:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:42.334 04:11:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:42.334 04:11:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:42.334 04:11:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:42.334 04:11:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:20:42.334 04:11:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:20:42.334 04:11:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ad267bc8-7895-490c-a341-4a1f0bd95033 -c nvc0n1p0 --l2p_dram_limit 20 00:20:42.592 [2024-12-06 04:11:29.962290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.592 [2024-12-06 04:11:29.962347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:42.592 [2024-12-06 04:11:29.962360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:20:42.592 [2024-12-06 04:11:29.962368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.592 [2024-12-06 04:11:29.962420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.592 [2024-12-06 04:11:29.962430] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:42.592 [2024-12-06 04:11:29.962437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:42.592 [2024-12-06 04:11:29.962445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.592 [2024-12-06 04:11:29.962460] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:42.592 [2024-12-06 04:11:29.963130] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:42.592 [2024-12-06 04:11:29.963194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.592 [2024-12-06 04:11:29.963204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:42.592 [2024-12-06 04:11:29.963211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:20:42.592 [2024-12-06 04:11:29.963219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.592 [2024-12-06 04:11:29.963320] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e03f0122-0289-413c-8253-66d39dfc231f 00:20:42.592 [2024-12-06 04:11:29.964369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.592 [2024-12-06 04:11:29.964398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:42.592 [2024-12-06 04:11:29.964411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:42.592 [2024-12-06 04:11:29.964417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.592 [2024-12-06 04:11:29.969773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.592 [2024-12-06 04:11:29.969879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:42.592 [2024-12-06 04:11:29.969894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.324 ms 00:20:42.592 [2024-12-06 04:11:29.969903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.592 [2024-12-06 04:11:29.969975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.592 [2024-12-06 04:11:29.969983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:42.592 [2024-12-06 04:11:29.969993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:42.592 [2024-12-06 04:11:29.969999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.592 [2024-12-06 04:11:29.970060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.592 [2024-12-06 04:11:29.970068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:42.592 [2024-12-06 04:11:29.970076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:42.592 [2024-12-06 04:11:29.970082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.592 [2024-12-06 04:11:29.970102] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:42.592 [2024-12-06 04:11:29.973171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.592 [2024-12-06 04:11:29.973262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:42.592 [2024-12-06 04:11:29.973274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.078 ms 00:20:42.592 [2024-12-06 04:11:29.973285] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.592 [2024-12-06 04:11:29.973312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.592 [2024-12-06 04:11:29.973320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:42.592 [2024-12-06 04:11:29.973326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:42.592 [2024-12-06 04:11:29.973334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.592 [2024-12-06 04:11:29.973354] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:42.592 [2024-12-06 04:11:29.973475] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:42.592 [2024-12-06 04:11:29.973484] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:42.593 [2024-12-06 04:11:29.973494] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:42.593 [2024-12-06 04:11:29.973501] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:42.593 [2024-12-06 04:11:29.973511] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:42.593 [2024-12-06 04:11:29.973517] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:42.593 [2024-12-06 04:11:29.973524] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:42.593 [2024-12-06 04:11:29.973530] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:42.593 [2024-12-06 04:11:29.973538] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:42.593 [2024-12-06 04:11:29.973545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.593 [2024-12-06 04:11:29.973552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:42.593 [2024-12-06 04:11:29.973558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:20:42.593 [2024-12-06 04:11:29.973565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.593 [2024-12-06 04:11:29.973630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.593 [2024-12-06 04:11:29.973638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:42.593 [2024-12-06 04:11:29.973644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:42.593 [2024-12-06 04:11:29.973652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.593 [2024-12-06 04:11:29.973740] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:42.593 [2024-12-06 04:11:29.973752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:42.593 [2024-12-06 04:11:29.973758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:42.593 [2024-12-06 04:11:29.973766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.593 [2024-12-06 04:11:29.973773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:42.593 [2024-12-06 04:11:29.973779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:42.593 [2024-12-06 04:11:29.973785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:42.593 
[2024-12-06 04:11:29.973792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:42.593 [2024-12-06 04:11:29.973797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:42.593 [2024-12-06 04:11:29.973804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:42.593 [2024-12-06 04:11:29.973809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:42.593 [2024-12-06 04:11:29.973822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:42.593 [2024-12-06 04:11:29.973828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:42.593 [2024-12-06 04:11:29.973835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:42.593 [2024-12-06 04:11:29.973841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:42.593 [2024-12-06 04:11:29.973849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.593 [2024-12-06 04:11:29.973854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:42.593 [2024-12-06 04:11:29.973861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:42.593 [2024-12-06 04:11:29.973866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.593 [2024-12-06 04:11:29.973872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:42.593 [2024-12-06 04:11:29.973878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:42.593 [2024-12-06 04:11:29.973884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.593 [2024-12-06 04:11:29.973889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:42.593 [2024-12-06 04:11:29.973896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:42.593 [2024-12-06 04:11:29.973901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.593 [2024-12-06 04:11:29.973908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:42.593 [2024-12-06 04:11:29.973913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:42.593 [2024-12-06 04:11:29.973920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.593 [2024-12-06 04:11:29.973925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:42.593 [2024-12-06 04:11:29.973931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:42.593 [2024-12-06 04:11:29.973936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.593 [2024-12-06 04:11:29.973944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:42.593 [2024-12-06 04:11:29.973949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:42.593 [2024-12-06 04:11:29.973956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:42.593 [2024-12-06 04:11:29.973961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:42.593 [2024-12-06 04:11:29.973968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:42.593 [2024-12-06 04:11:29.973979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:42.593 [2024-12-06 04:11:29.973986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:42.593 [2024-12-06 04:11:29.973992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:20:42.593 [2024-12-06 04:11:29.973998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.593 [2024-12-06 04:11:29.974004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:42.593 [2024-12-06 04:11:29.974010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:42.593 [2024-12-06 04:11:29.974015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.593 [2024-12-06 04:11:29.974021] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:42.593 [2024-12-06 04:11:29.974028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:42.593 [2024-12-06 04:11:29.974036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:42.593 [2024-12-06 04:11:29.974041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.593 [2024-12-06 04:11:29.974050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:42.593 [2024-12-06 04:11:29.974056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:42.593 [2024-12-06 04:11:29.974062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:42.593 [2024-12-06 04:11:29.974067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:42.593 [2024-12-06 04:11:29.974074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:42.593 [2024-12-06 04:11:29.974079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:42.593 [2024-12-06 04:11:29.974087] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:42.593 [2024-12-06 04:11:29.974094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:42.593 [2024-12-06 04:11:29.974103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:42.593 [2024-12-06 04:11:29.974109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:42.593 [2024-12-06 04:11:29.974116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:42.593 [2024-12-06 04:11:29.974121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:42.593 [2024-12-06 04:11:29.974128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:42.593 [2024-12-06 04:11:29.974134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:42.593 [2024-12-06 04:11:29.974141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:42.593 [2024-12-06 04:11:29.974147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:42.593 [2024-12-06 04:11:29.974156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:42.593 [2024-12-06 04:11:29.974162] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:42.593 [2024-12-06 04:11:29.974169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:42.593 [2024-12-06 04:11:29.974175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:42.593 [2024-12-06 04:11:29.974182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:42.593 [2024-12-06 04:11:29.974190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:42.593 [2024-12-06 04:11:29.974197] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:42.593 [2024-12-06 04:11:29.974204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:42.593 [2024-12-06 04:11:29.974213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:42.593 [2024-12-06 04:11:29.974219] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:42.593 [2024-12-06 04:11:29.974226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:42.593 [2024-12-06 04:11:29.974232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:42.593 [2024-12-06 04:11:29.974240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.593 [2024-12-06 04:11:29.974245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:42.593 [2024-12-06 04:11:29.974252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:20:42.593 [2024-12-06 04:11:29.974258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.593 [2024-12-06 04:11:29.974299] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
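Stripped of xtrace noise, the RPC sequence that built ftl0 for this test is short. After clearing any stale lvstores, it is roughly the chain below; every command and flag appears verbatim in the trace above, with UUIDs replaced by placeholders:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore on it
  lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>)     # thin 103424 MiB lvol
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache NVMe
  $rpc bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB write buffer
  $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 20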
00:20:42.593 [2024-12-06 04:11:29.974307] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:45.124 [2024-12-06 04:11:32.087918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.124 [2024-12-06 04:11:32.088160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:45.124 [2024-12-06 04:11:32.088186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2113.596 ms 00:20:45.124 [2024-12-06 04:11:32.088195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.124 [2024-12-06 04:11:32.114921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.124 [2024-12-06 04:11:32.115742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:45.124 [2024-12-06 04:11:32.115771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.507 ms 00:20:45.124 [2024-12-06 04:11:32.115781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.124 [2024-12-06 04:11:32.115931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.124 [2024-12-06 04:11:32.115943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:45.124 [2024-12-06 04:11:32.115957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:45.124 [2024-12-06 04:11:32.115967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.124 [2024-12-06 04:11:32.163350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.124 [2024-12-06 04:11:32.163563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:45.124 [2024-12-06 04:11:32.163586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.348 ms 00:20:45.124 [2024-12-06 04:11:32.163596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.124 [2024-12-06 04:11:32.163646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.124 [2024-12-06 04:11:32.163656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:45.124 [2024-12-06 04:11:32.163666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:45.124 [2024-12-06 04:11:32.163675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.124 [2024-12-06 04:11:32.164056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.124 [2024-12-06 04:11:32.164079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:45.124 [2024-12-06 04:11:32.164090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:20:45.124 [2024-12-06 04:11:32.164097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.124 [2024-12-06 04:11:32.164223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.124 [2024-12-06 04:11:32.164236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:45.124 [2024-12-06 04:11:32.164247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:20:45.124 [2024-12-06 04:11:32.164254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.124 [2024-12-06 04:11:32.177270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.124 [2024-12-06 04:11:32.177309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:45.124 [2024-12-06 
04:11:32.177322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.996 ms 00:20:45.124 [2024-12-06 04:11:32.177337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.124 [2024-12-06 04:11:32.188875] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:20:45.124 [2024-12-06 04:11:32.194084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.124 [2024-12-06 04:11:32.194244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:45.124 [2024-12-06 04:11:32.194262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.649 ms 00:20:45.124 [2024-12-06 04:11:32.194271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.124 [2024-12-06 04:11:32.254634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.124 [2024-12-06 04:11:32.254700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:45.124 [2024-12-06 04:11:32.254713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.325 ms 00:20:45.124 [2024-12-06 04:11:32.254738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.124 [2024-12-06 04:11:32.254930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.124 [2024-12-06 04:11:32.254946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:45.124 [2024-12-06 04:11:32.254954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:20:45.124 [2024-12-06 04:11:32.254966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.124 [2024-12-06 04:11:32.279027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.124 [2024-12-06 04:11:32.279092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:45.124 [2024-12-06 04:11:32.279105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.009 ms 00:20:45.124 [2024-12-06 04:11:32.279115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.124 [2024-12-06 04:11:32.302460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.124 [2024-12-06 04:11:32.302531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:45.124 [2024-12-06 04:11:32.302544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.294 ms 00:20:45.124 [2024-12-06 04:11:32.302553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.124 [2024-12-06 04:11:32.303148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.124 [2024-12-06 04:11:32.303171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:45.124 [2024-12-06 04:11:32.303180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:20:45.124 [2024-12-06 04:11:32.303189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.124 [2024-12-06 04:11:32.373236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.124 [2024-12-06 04:11:32.373425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:45.124 [2024-12-06 04:11:32.373444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.003 ms 00:20:45.124 [2024-12-06 04:11:32.373453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.124 [2024-12-06 
04:11:32.398290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.124 [2024-12-06 04:11:32.398351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:45.124 [2024-12-06 04:11:32.398366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.760 ms 00:20:45.124 [2024-12-06 04:11:32.398376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.124 [2024-12-06 04:11:32.422552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.124 [2024-12-06 04:11:32.422599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:45.124 [2024-12-06 04:11:32.422611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.131 ms 00:20:45.124 [2024-12-06 04:11:32.422621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.124 [2024-12-06 04:11:32.446369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.124 [2024-12-06 04:11:32.446430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:45.124 [2024-12-06 04:11:32.446442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.697 ms 00:20:45.124 [2024-12-06 04:11:32.446452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.124 [2024-12-06 04:11:32.446511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.124 [2024-12-06 04:11:32.446526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:45.124 [2024-12-06 04:11:32.446535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:45.124 [2024-12-06 04:11:32.446544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.124 [2024-12-06 04:11:32.446629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.124 [2024-12-06 04:11:32.446642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:45.124 [2024-12-06 04:11:32.446650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:45.124 [2024-12-06 04:11:32.446658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.124 [2024-12-06 04:11:32.447525] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2484.922 ms, result 0 00:20:45.124 { 00:20:45.124 "name": "ftl0", 00:20:45.124 "uuid": "e03f0122-0289-413c-8253-66d39dfc231f" 00:20:45.124 } 00:20:45.124 04:11:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:20:45.124 04:11:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:20:45.124 04:11:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:20:45.124 04:11:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:20:45.384 [2024-12-06 04:11:32.727853] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:45.384 I/O size of 69632 is greater than zero copy threshold (65536). 00:20:45.384 Zero copy mechanism will not be used. 00:20:45.384 Running I/O for 4 seconds... 
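The layout numbers in the dump above are self-consistent: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB l2p region reported by ftl_layout.c, while the --l2p_dram_limit 20 passed at create time caps the resident portion (hence the earlier l2p_cache notice of 19 of 20 MiB):

  # Quick check of the l2p region size reported by ftl_layout.c:
  awk 'BEGIN { printf "%.2f MiB\n", 20971520 * 4 / (1024 * 1024) }'   # -> 80.00 MiB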
00:20:47.314 2594.00 IOPS, 172.26 MiB/s [2024-12-06T04:11:35.783Z] 2589.00 IOPS, 171.93 MiB/s [2024-12-06T04:11:37.156Z] 2458.00 IOPS, 163.23 MiB/s [2024-12-06T04:11:37.156Z] 2649.00 IOPS, 175.91 MiB/s 00:20:49.629 Latency(us) 00:20:49.629 [2024-12-06T04:11:37.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.629 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:20:49.629 ftl0 : 4.00 2648.19 175.86 0.00 0.00 397.74 154.39 82272.89 00:20:49.629 [2024-12-06T04:11:37.156Z] =================================================================================================================== 00:20:49.629 [2024-12-06T04:11:37.156Z] Total : 2648.19 175.86 0.00 0.00 397.74 154.39 82272.89 00:20:49.629 { 00:20:49.629 "results": [ 00:20:49.629 { 00:20:49.629 "job": "ftl0", 00:20:49.629 "core_mask": "0x1", 00:20:49.629 "workload": "randwrite", 00:20:49.629 "status": "finished", 00:20:49.629 "queue_depth": 1, 00:20:49.629 "io_size": 69632, 00:20:49.629 "runtime": 4.001604, 00:20:49.629 "iops": 2648.1880765812907, 00:20:49.629 "mibps": 175.85623946047633, 00:20:49.629 "io_failed": 0, 00:20:49.629 "io_timeout": 0, 00:20:49.629 "avg_latency_us": 397.7374891297246, 00:20:49.629 "min_latency_us": 154.3876923076923, 00:20:49.629 "max_latency_us": 82272.88615384615 00:20:49.629 } 00:20:49.629 ], 00:20:49.629 "core_count": 1 00:20:49.629 } 00:20:49.629 [2024-12-06 04:11:36.737333] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:49.629 04:11:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:20:49.629 [2024-12-06 04:11:36.834271] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:49.629 Running I/O for 4 seconds... 
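The first pass's numbers above reconcile: at queue depth 1 with 69632-byte I/Os (above the 65536-byte zero-copy threshold, as the log notes), 2648.19 IOPS works out to exactly the reported 175.86 MiB/s:

  awk 'BEGIN { printf "%.2f MiB/s\n", 2648.19 * 69632 / (1024 * 1024) }'   # -> 175.86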
00:20:51.513 9626.00 IOPS, 37.60 MiB/s [2024-12-06T04:11:39.984Z] 7904.50 IOPS, 30.88 MiB/s [2024-12-06T04:11:40.924Z] 7749.00 IOPS, 30.27 MiB/s [2024-12-06T04:11:40.924Z] 7770.50 IOPS, 30.35 MiB/s 00:20:53.397 Latency(us) 00:20:53.397 [2024-12-06T04:11:40.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.397 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:20:53.397 ftl0 : 4.02 7762.40 30.32 0.00 0.00 16446.78 266.24 47992.52 00:20:53.397 [2024-12-06T04:11:40.924Z] =================================================================================================================== 00:20:53.397 [2024-12-06T04:11:40.924Z] Total : 7762.40 30.32 0.00 0.00 16446.78 0.00 47992.52 00:20:53.397 [2024-12-06 04:11:40.863691] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:53.397 { 00:20:53.397 "results": [ 00:20:53.397 { 00:20:53.397 "job": "ftl0", 00:20:53.397 "core_mask": "0x1", 00:20:53.397 "workload": "randwrite", 00:20:53.397 "status": "finished", 00:20:53.397 "queue_depth": 128, 00:20:53.397 "io_size": 4096, 00:20:53.397 "runtime": 4.020535, 00:20:53.397 "iops": 7762.399780128764, 00:20:53.397 "mibps": 30.321874141127985, 00:20:53.397 "io_failed": 0, 00:20:53.397 "io_timeout": 0, 00:20:53.397 "avg_latency_us": 16446.77505630772, 00:20:53.397 "min_latency_us": 266.24, 00:20:53.397 "max_latency_us": 47992.516923076924 00:20:53.397 } 00:20:53.397 ], 00:20:53.397 "core_count": 1 00:20:53.397 } 00:20:53.397 04:11:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:20:53.657 [2024-12-06 04:11:40.969803] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:53.657 Running I/O for 4 seconds... 
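This third pass switches the workload to verify. Judging by the verify_range in the results that follow (start 0, length 20971520, matching the L2P entry count reported at startup), the range appears to be expressed in 4 KiB blocks, which would make it the whole device:

  # If the 20971520-unit range is in 4 KiB blocks, the verified span is 80 GiB:
  awk 'BEGIN { printf "%.0f GiB\n", 20971520 * 4096 / (1024 ^ 3) }'   # -> 80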
00:20:55.533 7532.00 IOPS, 29.42 MiB/s [2024-12-06T04:11:44.034Z] 7600.00 IOPS, 29.69 MiB/s [2024-12-06T04:11:45.413Z] 7635.33 IOPS, 29.83 MiB/s [2024-12-06T04:11:45.413Z] 7882.25 IOPS, 30.79 MiB/s 00:20:57.886 Latency(us) 00:20:57.886 [2024-12-06T04:11:45.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.886 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:57.886 Verification LBA range: start 0x0 length 0x1400000 00:20:57.886 ftl0 : 4.01 7895.68 30.84 0.00 0.00 16162.03 269.39 27424.30 00:20:57.886 [2024-12-06T04:11:45.413Z] =================================================================================================================== 00:20:57.886 [2024-12-06T04:11:45.413Z] Total : 7895.68 30.84 0.00 0.00 16162.03 0.00 27424.30 00:20:57.886 [2024-12-06 04:11:44.994498] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:57.886 { 00:20:57.886 "results": [ 00:20:57.886 { 00:20:57.886 "job": "ftl0", 00:20:57.886 "core_mask": "0x1", 00:20:57.886 "workload": "verify", 00:20:57.886 "status": "finished", 00:20:57.886 "verify_range": { 00:20:57.886 "start": 0, 00:20:57.886 "length": 20971520 00:20:57.886 }, 00:20:57.886 "queue_depth": 128, 00:20:57.886 "io_size": 4096, 00:20:57.886 "runtime": 4.00928, 00:20:57.886 "iops": 7895.682017718892, 00:20:57.886 "mibps": 30.842507881714422, 00:20:57.886 "io_failed": 0, 00:20:57.886 "io_timeout": 0, 00:20:57.886 "avg_latency_us": 16162.02631208569, 00:20:57.886 "min_latency_us": 269.39076923076925, 00:20:57.886 "max_latency_us": 27424.295384615383 00:20:57.886 } 00:20:57.886 ], 00:20:57.886 "core_count": 1 00:20:57.886 } 00:20:57.886 04:11:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:20:57.886 [2024-12-06 04:11:45.252672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.886 [2024-12-06 04:11:45.253864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:57.886 [2024-12-06 04:11:45.253954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:57.886 [2024-12-06 04:11:45.253983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.886 [2024-12-06 04:11:45.254030] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:57.886 [2024-12-06 04:11:45.256770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.886 [2024-12-06 04:11:45.256885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:57.886 [2024-12-06 04:11:45.256949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.584 ms 00:20:57.886 [2024-12-06 04:11:45.256972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.886 [2024-12-06 04:11:45.258393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.886 [2024-12-06 04:11:45.258505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:57.886 [2024-12-06 04:11:45.258581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.322 ms 00:20:57.886 [2024-12-06 04:11:45.258593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.146 [2024-12-06 04:11:45.434494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.147 [2024-12-06 04:11:45.434670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:20:58.147 [2024-12-06 04:11:45.434765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 175.856 ms 00:20:58.147 [2024-12-06 04:11:45.434790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.147 [2024-12-06 04:11:45.441013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.147 [2024-12-06 04:11:45.441149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:58.147 [2024-12-06 04:11:45.441210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.172 ms 00:20:58.147 [2024-12-06 04:11:45.441236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.147 [2024-12-06 04:11:45.464451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.147 [2024-12-06 04:11:45.464602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:58.147 [2024-12-06 04:11:45.464661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.136 ms 00:20:58.147 [2024-12-06 04:11:45.464703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.147 [2024-12-06 04:11:45.479213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.147 [2024-12-06 04:11:45.479362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:58.147 [2024-12-06 04:11:45.479421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.406 ms 00:20:58.147 [2024-12-06 04:11:45.479444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.147 [2024-12-06 04:11:45.479600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.147 [2024-12-06 04:11:45.479628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:58.147 [2024-12-06 04:11:45.479653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:20:58.147 [2024-12-06 04:11:45.479698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.147 [2024-12-06 04:11:45.502819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.147 [2024-12-06 04:11:45.502960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:58.147 [2024-12-06 04:11:45.503019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.078 ms 00:20:58.147 [2024-12-06 04:11:45.503042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.147 [2024-12-06 04:11:45.525730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.147 [2024-12-06 04:11:45.525872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:58.147 [2024-12-06 04:11:45.525925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.643 ms 00:20:58.147 [2024-12-06 04:11:45.525948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.147 [2024-12-06 04:11:45.548129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.147 [2024-12-06 04:11:45.548260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:58.147 [2024-12-06 04:11:45.548318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.126 ms 00:20:58.147 [2024-12-06 04:11:45.548339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.147 [2024-12-06 04:11:45.571268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.147 [2024-12-06 04:11:45.571405] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:58.147 [2024-12-06 04:11:45.571465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.847 ms 00:20:58.147 [2024-12-06 04:11:45.571488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.147 [2024-12-06 04:11:45.571549] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:58.147 [2024-12-06 04:11:45.571578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.571642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.571673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.571704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.571786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.571818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.571847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.571900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.571962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.572013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.572042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.572072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.572129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.572163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.572192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.572244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.572313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.572374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.572404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.572527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.572556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.572587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:20:58.147 [2024-12-06 04:11:45.572615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.572742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.572773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.572803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.572832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.572921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.572952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.572984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.573012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.573085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.573113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.573143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.573224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.573292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.573322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.573353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.573474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.573487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.573495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.573505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.573512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.573524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.573532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.573542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.573550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.573559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:58.147 [2024-12-06 04:11:45.573566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573981] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.573998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.574007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:58.148 [2024-12-06 04:11:45.574024] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:58.148 [2024-12-06 04:11:45.574033] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e03f0122-0289-413c-8253-66d39dfc231f 00:20:58.148 [2024-12-06 04:11:45.574043] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:58.148 [2024-12-06 04:11:45.574052] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:58.148 [2024-12-06 04:11:45.574058] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:58.148 [2024-12-06 04:11:45.574068] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:58.148 [2024-12-06 04:11:45.574074] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:58.148 [2024-12-06 04:11:45.574083] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:58.148 [2024-12-06 04:11:45.574091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:58.148 [2024-12-06 04:11:45.574101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:58.148 [2024-12-06 04:11:45.574107] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:58.148 [2024-12-06 04:11:45.574116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.148 [2024-12-06 04:11:45.574123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:58.148 [2024-12-06 04:11:45.574134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.569 ms 00:20:58.148 [2024-12-06 04:11:45.574141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.148 [2024-12-06 04:11:45.586613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.148 [2024-12-06 04:11:45.586761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:58.148 [2024-12-06 04:11:45.586817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.429 ms 00:20:58.148 [2024-12-06 04:11:45.586839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.148 [2024-12-06 04:11:45.587219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.148 [2024-12-06 04:11:45.587290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:58.148 [2024-12-06 04:11:45.587339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:20:58.148 [2024-12-06 04:11:45.587388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.148 [2024-12-06 04:11:45.621810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.148 [2024-12-06 04:11:45.621969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:58.148 [2024-12-06 04:11:45.622023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.148 [2024-12-06 04:11:45.622046] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:58.148 [2024-12-06 04:11:45.622124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.148 [2024-12-06 04:11:45.622145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:58.148 [2024-12-06 04:11:45.622166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.148 [2024-12-06 04:11:45.622184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.148 [2024-12-06 04:11:45.622292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.148 [2024-12-06 04:11:45.622421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:58.148 [2024-12-06 04:11:45.622443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.148 [2024-12-06 04:11:45.622461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.148 [2024-12-06 04:11:45.622501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.148 [2024-12-06 04:11:45.622521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:58.148 [2024-12-06 04:11:45.622542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.148 [2024-12-06 04:11:45.622646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.407 [2024-12-06 04:11:45.699332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.407 [2024-12-06 04:11:45.699494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:58.407 [2024-12-06 04:11:45.699556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.407 [2024-12-06 04:11:45.699600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.407 [2024-12-06 04:11:45.761761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.407 [2024-12-06 04:11:45.761921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:58.407 [2024-12-06 04:11:45.761939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.407 [2024-12-06 04:11:45.761947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.407 [2024-12-06 04:11:45.762020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.407 [2024-12-06 04:11:45.762029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:58.407 [2024-12-06 04:11:45.762039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.407 [2024-12-06 04:11:45.762046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.407 [2024-12-06 04:11:45.762102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.407 [2024-12-06 04:11:45.762111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:58.407 [2024-12-06 04:11:45.762120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.407 [2024-12-06 04:11:45.762128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.407 [2024-12-06 04:11:45.762219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.407 [2024-12-06 04:11:45.762231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:58.407 [2024-12-06 04:11:45.762243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:20:58.407 [2024-12-06 04:11:45.762251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.407 [2024-12-06 04:11:45.762281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.407 [2024-12-06 04:11:45.762290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:58.407 [2024-12-06 04:11:45.762299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.407 [2024-12-06 04:11:45.762306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.408 [2024-12-06 04:11:45.762339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.408 [2024-12-06 04:11:45.762349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:58.408 [2024-12-06 04:11:45.762359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.408 [2024-12-06 04:11:45.762372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.408 [2024-12-06 04:11:45.762413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.408 [2024-12-06 04:11:45.762424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:58.408 [2024-12-06 04:11:45.762433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.408 [2024-12-06 04:11:45.762441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.408 [2024-12-06 04:11:45.762570] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 509.866 ms, result 0 00:20:58.408 true 00:20:58.408 04:11:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75886 00:20:58.408 04:11:45 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 75886 ']' 00:20:58.408 04:11:45 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 75886 00:20:58.408 04:11:45 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:20:58.408 04:11:45 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.408 04:11:45 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75886 00:20:58.408 killing process with pid 75886 00:20:58.408 Received shutdown signal, test time was about 4.000000 seconds 00:20:58.408 00:20:58.408 Latency(us) 00:20:58.408 [2024-12-06T04:11:45.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.408 [2024-12-06T04:11:45.935Z] =================================================================================================================== 00:20:58.408 [2024-12-06T04:11:45.935Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:58.408 04:11:45 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:58.408 04:11:45 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:58.408 04:11:45 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75886' 00:20:58.408 04:11:45 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 75886 00:20:58.408 04:11:45 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 75886 00:20:59.348 04:11:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:59.348 04:11:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:20:59.348 Remove shared memory files 00:20:59.348 04:11:46 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:59.348 04:11:46 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:20:59.348 04:11:46 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:20:59.348 04:11:46 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:20:59.348 04:11:46 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:59.348 04:11:46 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:20:59.348 ************************************ 00:20:59.348 END TEST ftl_bdevperf 00:20:59.348 ************************************ 00:20:59.348 00:20:59.348 real 0m20.812s 00:20:59.348 user 0m23.621s 00:20:59.348 sys 0m0.846s 00:20:59.348 04:11:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:59.348 04:11:46 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:59.348 04:11:46 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:59.348 04:11:46 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:59.348 04:11:46 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:59.348 04:11:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:59.348 ************************************ 00:20:59.348 START TEST ftl_trim 00:20:59.348 ************************************ 00:20:59.348 04:11:46 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:59.348 * Looking for test storage... 00:20:59.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:59.348 04:11:46 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:59.348 04:11:46 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:20:59.348 04:11:46 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:59.348 04:11:46 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:59.348 04:11:46 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:20:59.348 04:11:46 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:59.348 04:11:46 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:59.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.348 --rc genhtml_branch_coverage=1 00:20:59.348 --rc genhtml_function_coverage=1 00:20:59.348 --rc genhtml_legend=1 00:20:59.348 --rc geninfo_all_blocks=1 00:20:59.348 --rc geninfo_unexecuted_blocks=1 00:20:59.348 00:20:59.348 ' 00:20:59.348 04:11:46 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:59.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.348 --rc genhtml_branch_coverage=1 00:20:59.348 --rc genhtml_function_coverage=1 00:20:59.348 --rc genhtml_legend=1 00:20:59.348 --rc geninfo_all_blocks=1 00:20:59.348 --rc geninfo_unexecuted_blocks=1 00:20:59.348 00:20:59.348 ' 00:20:59.348 04:11:46 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:59.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.348 --rc genhtml_branch_coverage=1 00:20:59.348 --rc genhtml_function_coverage=1 00:20:59.348 --rc genhtml_legend=1 00:20:59.348 --rc geninfo_all_blocks=1 00:20:59.348 --rc geninfo_unexecuted_blocks=1 00:20:59.348 00:20:59.348 ' 00:20:59.348 04:11:46 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:59.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.348 --rc genhtml_branch_coverage=1 00:20:59.348 --rc genhtml_function_coverage=1 00:20:59.348 --rc genhtml_legend=1 00:20:59.348 --rc geninfo_all_blocks=1 00:20:59.348 --rc geninfo_unexecuted_blocks=1 00:20:59.348 00:20:59.348 ' 00:20:59.348 04:11:46 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:59.348 04:11:46 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:20:59.348 04:11:46 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
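The trace above is scripts/common.sh probing the installed lcov version: each version string is split on '.', '-' and ':' and the fields are compared numerically left to right, so 1.15 sorts below 2 and the legacy --rc lcov_*_coverage option spelling is exported. A minimal sketch of that comparison, with ver_lt as a hypothetical stand-in for the lt/cmp_versions helpers traced here:

  ver_lt() {
    local IFS=.-: i
    local -a a b
    read -ra a <<< "$1"   # split first version into numeric fields
    read -ra b <<< "$2"   # split second version the same way
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
  }
  ver_lt 1.15 2 && echo "lcov older than 2: use --rc lcov_branch_coverage=1 ..."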
00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:59.349 04:11:46 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76222 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76222 00:20:59.349 04:11:46 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:20:59.349 04:11:46 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76222 ']' 00:20:59.349 04:11:46 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.349 04:11:46 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.349 04:11:46 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.349 04:11:46 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.349 04:11:46 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:59.609 [2024-12-06 04:11:46.926065] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:20:59.609 [2024-12-06 04:11:46.926678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76222 ] 00:20:59.609 [2024-12-06 04:11:47.080835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:59.871 [2024-12-06 04:11:47.166824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.871 [2024-12-06 04:11:47.166881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.871 [2024-12-06 04:11:47.166907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.444 04:11:47 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.444 04:11:47 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:00.444 04:11:47 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:00.444 04:11:47 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:21:00.444 04:11:47 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:00.444 04:11:47 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:21:00.444 04:11:47 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:21:00.444 04:11:47 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:00.706 04:11:48 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:00.706 04:11:48 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:21:00.706 04:11:48 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:00.706 04:11:48 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:00.706 04:11:48 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:00.706 04:11:48 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:00.706 04:11:48 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:00.706 04:11:48 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:00.967 04:11:48 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:00.967 { 00:21:00.967 "name": "nvme0n1", 00:21:00.967 "aliases": [ 
00:21:00.967 "23b6b321-cf60-4e44-bba0-f1ebf3c9a9cf" 00:21:00.967 ], 00:21:00.967 "product_name": "NVMe disk", 00:21:00.967 "block_size": 4096, 00:21:00.967 "num_blocks": 1310720, 00:21:00.967 "uuid": "23b6b321-cf60-4e44-bba0-f1ebf3c9a9cf", 00:21:00.967 "numa_id": -1, 00:21:00.967 "assigned_rate_limits": { 00:21:00.967 "rw_ios_per_sec": 0, 00:21:00.967 "rw_mbytes_per_sec": 0, 00:21:00.967 "r_mbytes_per_sec": 0, 00:21:00.967 "w_mbytes_per_sec": 0 00:21:00.967 }, 00:21:00.967 "claimed": true, 00:21:00.967 "claim_type": "read_many_write_one", 00:21:00.967 "zoned": false, 00:21:00.967 "supported_io_types": { 00:21:00.967 "read": true, 00:21:00.967 "write": true, 00:21:00.967 "unmap": true, 00:21:00.967 "flush": true, 00:21:00.967 "reset": true, 00:21:00.967 "nvme_admin": true, 00:21:00.967 "nvme_io": true, 00:21:00.967 "nvme_io_md": false, 00:21:00.967 "write_zeroes": true, 00:21:00.967 "zcopy": false, 00:21:00.967 "get_zone_info": false, 00:21:00.967 "zone_management": false, 00:21:00.967 "zone_append": false, 00:21:00.967 "compare": true, 00:21:00.967 "compare_and_write": false, 00:21:00.967 "abort": true, 00:21:00.967 "seek_hole": false, 00:21:00.967 "seek_data": false, 00:21:00.967 "copy": true, 00:21:00.967 "nvme_iov_md": false 00:21:00.967 }, 00:21:00.968 "driver_specific": { 00:21:00.968 "nvme": [ 00:21:00.968 { 00:21:00.968 "pci_address": "0000:00:11.0", 00:21:00.968 "trid": { 00:21:00.968 "trtype": "PCIe", 00:21:00.968 "traddr": "0000:00:11.0" 00:21:00.968 }, 00:21:00.968 "ctrlr_data": { 00:21:00.968 "cntlid": 0, 00:21:00.968 "vendor_id": "0x1b36", 00:21:00.968 "model_number": "QEMU NVMe Ctrl", 00:21:00.968 "serial_number": "12341", 00:21:00.968 "firmware_revision": "8.0.0", 00:21:00.968 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:00.968 "oacs": { 00:21:00.968 "security": 0, 00:21:00.968 "format": 1, 00:21:00.968 "firmware": 0, 00:21:00.968 "ns_manage": 1 00:21:00.968 }, 00:21:00.968 "multi_ctrlr": false, 00:21:00.968 "ana_reporting": false 00:21:00.968 }, 00:21:00.968 "vs": { 00:21:00.968 "nvme_version": "1.4" 00:21:00.968 }, 00:21:00.968 "ns_data": { 00:21:00.968 "id": 1, 00:21:00.968 "can_share": false 00:21:00.968 } 00:21:00.968 } 00:21:00.968 ], 00:21:00.968 "mp_policy": "active_passive" 00:21:00.968 } 00:21:00.968 } 00:21:00.968 ]' 00:21:00.968 04:11:48 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:00.968 04:11:48 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:00.968 04:11:48 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:00.968 04:11:48 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:00.968 04:11:48 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:00.968 04:11:48 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:21:00.968 04:11:48 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:21:00.968 04:11:48 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:00.968 04:11:48 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:21:00.968 04:11:48 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:00.968 04:11:48 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:01.229 04:11:48 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=90fad635-52d3-4dbb-b3d3-b102793492c8 00:21:01.229 04:11:48 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:21:01.229 04:11:48 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 90fad635-52d3-4dbb-b3d3-b102793492c8 00:21:01.490 04:11:48 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:01.490 04:11:48 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=db32acbf-b677-4b15-bfaa-0b5275d12596 00:21:01.490 04:11:48 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u db32acbf-b677-4b15-bfaa-0b5275d12596 00:21:01.752 04:11:49 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=f60bad50-d738-4eaa-a035-a620a7eefc48 00:21:01.752 04:11:49 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f60bad50-d738-4eaa-a035-a620a7eefc48 00:21:01.752 04:11:49 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:21:01.752 04:11:49 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:01.752 04:11:49 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=f60bad50-d738-4eaa-a035-a620a7eefc48 00:21:01.752 04:11:49 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:21:01.752 04:11:49 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size f60bad50-d738-4eaa-a035-a620a7eefc48 00:21:01.752 04:11:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=f60bad50-d738-4eaa-a035-a620a7eefc48 00:21:01.752 04:11:49 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:01.752 04:11:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:01.752 04:11:49 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:01.752 04:11:49 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f60bad50-d738-4eaa-a035-a620a7eefc48 00:21:02.014 04:11:49 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:02.014 { 00:21:02.014 "name": "f60bad50-d738-4eaa-a035-a620a7eefc48", 00:21:02.014 "aliases": [ 00:21:02.014 "lvs/nvme0n1p0" 00:21:02.014 ], 00:21:02.014 "product_name": "Logical Volume", 00:21:02.014 "block_size": 4096, 00:21:02.014 "num_blocks": 26476544, 00:21:02.014 "uuid": "f60bad50-d738-4eaa-a035-a620a7eefc48", 00:21:02.014 "assigned_rate_limits": { 00:21:02.014 "rw_ios_per_sec": 0, 00:21:02.014 "rw_mbytes_per_sec": 0, 00:21:02.014 "r_mbytes_per_sec": 0, 00:21:02.014 "w_mbytes_per_sec": 0 00:21:02.014 }, 00:21:02.014 "claimed": false, 00:21:02.014 "zoned": false, 00:21:02.014 "supported_io_types": { 00:21:02.014 "read": true, 00:21:02.014 "write": true, 00:21:02.014 "unmap": true, 00:21:02.014 "flush": false, 00:21:02.014 "reset": true, 00:21:02.014 "nvme_admin": false, 00:21:02.014 "nvme_io": false, 00:21:02.014 "nvme_io_md": false, 00:21:02.014 "write_zeroes": true, 00:21:02.014 "zcopy": false, 00:21:02.014 "get_zone_info": false, 00:21:02.014 "zone_management": false, 00:21:02.014 "zone_append": false, 00:21:02.014 "compare": false, 00:21:02.014 "compare_and_write": false, 00:21:02.014 "abort": false, 00:21:02.014 "seek_hole": true, 00:21:02.014 "seek_data": true, 00:21:02.014 "copy": false, 00:21:02.014 "nvme_iov_md": false 00:21:02.014 }, 00:21:02.014 "driver_specific": { 00:21:02.014 "lvol": { 00:21:02.014 "lvol_store_uuid": "db32acbf-b677-4b15-bfaa-0b5275d12596", 00:21:02.014 "base_bdev": "nvme0n1", 00:21:02.014 "thin_provision": true, 00:21:02.014 "num_allocated_clusters": 0, 00:21:02.014 "snapshot": false, 00:21:02.014 "clone": false, 00:21:02.014 "esnap_clone": false 00:21:02.014 } 00:21:02.014 } 00:21:02.014 } 00:21:02.014 ]' 00:21:02.014 04:11:49 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:02.014 04:11:49 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:02.014 04:11:49 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:02.014 04:11:49 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:02.015 04:11:49 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:02.015 04:11:49 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:02.015 04:11:49 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:21:02.015 04:11:49 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:21:02.015 04:11:49 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:02.276 04:11:49 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:02.276 04:11:49 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:02.276 04:11:49 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size f60bad50-d738-4eaa-a035-a620a7eefc48 00:21:02.276 04:11:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=f60bad50-d738-4eaa-a035-a620a7eefc48 00:21:02.276 04:11:49 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:02.276 04:11:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:02.276 04:11:49 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:02.276 04:11:49 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f60bad50-d738-4eaa-a035-a620a7eefc48 00:21:02.539 04:11:49 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:02.539 { 00:21:02.539 "name": "f60bad50-d738-4eaa-a035-a620a7eefc48", 00:21:02.539 "aliases": [ 00:21:02.539 "lvs/nvme0n1p0" 00:21:02.539 ], 00:21:02.539 "product_name": "Logical Volume", 00:21:02.539 "block_size": 4096, 00:21:02.539 "num_blocks": 26476544, 00:21:02.539 "uuid": "f60bad50-d738-4eaa-a035-a620a7eefc48", 00:21:02.539 "assigned_rate_limits": { 00:21:02.539 "rw_ios_per_sec": 0, 00:21:02.539 "rw_mbytes_per_sec": 0, 00:21:02.539 "r_mbytes_per_sec": 0, 00:21:02.539 "w_mbytes_per_sec": 0 00:21:02.539 }, 00:21:02.539 "claimed": false, 00:21:02.539 "zoned": false, 00:21:02.539 "supported_io_types": { 00:21:02.539 "read": true, 00:21:02.539 "write": true, 00:21:02.539 "unmap": true, 00:21:02.539 "flush": false, 00:21:02.539 "reset": true, 00:21:02.539 "nvme_admin": false, 00:21:02.539 "nvme_io": false, 00:21:02.539 "nvme_io_md": false, 00:21:02.539 "write_zeroes": true, 00:21:02.539 "zcopy": false, 00:21:02.539 "get_zone_info": false, 00:21:02.539 "zone_management": false, 00:21:02.539 "zone_append": false, 00:21:02.539 "compare": false, 00:21:02.539 "compare_and_write": false, 00:21:02.539 "abort": false, 00:21:02.539 "seek_hole": true, 00:21:02.539 "seek_data": true, 00:21:02.539 "copy": false, 00:21:02.539 "nvme_iov_md": false 00:21:02.539 }, 00:21:02.539 "driver_specific": { 00:21:02.539 "lvol": { 00:21:02.539 "lvol_store_uuid": "db32acbf-b677-4b15-bfaa-0b5275d12596", 00:21:02.539 "base_bdev": "nvme0n1", 00:21:02.539 "thin_provision": true, 00:21:02.539 "num_allocated_clusters": 0, 00:21:02.539 "snapshot": false, 00:21:02.539 "clone": false, 00:21:02.539 "esnap_clone": false 00:21:02.539 } 00:21:02.539 } 00:21:02.539 } 00:21:02.539 ]' 00:21:02.539 04:11:49 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:02.539 04:11:49 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:21:02.539 04:11:49 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:02.539 04:11:49 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:02.539 04:11:49 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:02.539 04:11:49 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:02.539 04:11:49 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:21:02.539 04:11:49 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:02.800 04:11:50 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:21:02.800 04:11:50 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:21:02.800 04:11:50 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size f60bad50-d738-4eaa-a035-a620a7eefc48 00:21:02.800 04:11:50 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=f60bad50-d738-4eaa-a035-a620a7eefc48 00:21:02.800 04:11:50 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:02.800 04:11:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:02.800 04:11:50 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:02.800 04:11:50 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f60bad50-d738-4eaa-a035-a620a7eefc48 00:21:03.062 04:11:50 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:03.062 { 00:21:03.062 "name": "f60bad50-d738-4eaa-a035-a620a7eefc48", 00:21:03.062 "aliases": [ 00:21:03.062 "lvs/nvme0n1p0" 00:21:03.062 ], 00:21:03.062 "product_name": "Logical Volume", 00:21:03.062 "block_size": 4096, 00:21:03.062 "num_blocks": 26476544, 00:21:03.062 "uuid": "f60bad50-d738-4eaa-a035-a620a7eefc48", 00:21:03.062 "assigned_rate_limits": { 00:21:03.062 "rw_ios_per_sec": 0, 00:21:03.062 "rw_mbytes_per_sec": 0, 00:21:03.062 "r_mbytes_per_sec": 0, 00:21:03.062 "w_mbytes_per_sec": 0 00:21:03.062 }, 00:21:03.062 "claimed": false, 00:21:03.062 "zoned": false, 00:21:03.062 "supported_io_types": { 00:21:03.062 "read": true, 00:21:03.062 "write": true, 00:21:03.062 "unmap": true, 00:21:03.062 "flush": false, 00:21:03.062 "reset": true, 00:21:03.062 "nvme_admin": false, 00:21:03.062 "nvme_io": false, 00:21:03.062 "nvme_io_md": false, 00:21:03.062 "write_zeroes": true, 00:21:03.062 "zcopy": false, 00:21:03.062 "get_zone_info": false, 00:21:03.062 "zone_management": false, 00:21:03.062 "zone_append": false, 00:21:03.062 "compare": false, 00:21:03.062 "compare_and_write": false, 00:21:03.062 "abort": false, 00:21:03.062 "seek_hole": true, 00:21:03.062 "seek_data": true, 00:21:03.062 "copy": false, 00:21:03.062 "nvme_iov_md": false 00:21:03.062 }, 00:21:03.062 "driver_specific": { 00:21:03.062 "lvol": { 00:21:03.062 "lvol_store_uuid": "db32acbf-b677-4b15-bfaa-0b5275d12596", 00:21:03.062 "base_bdev": "nvme0n1", 00:21:03.062 "thin_provision": true, 00:21:03.062 "num_allocated_clusters": 0, 00:21:03.062 "snapshot": false, 00:21:03.062 "clone": false, 00:21:03.062 "esnap_clone": false 00:21:03.062 } 00:21:03.062 } 00:21:03.062 } 00:21:03.062 ]' 00:21:03.062 04:11:50 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:03.062 04:11:50 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:03.062 04:11:50 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:03.062 04:11:50 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:21:03.062 04:11:50 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:03.062 04:11:50 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:03.062 04:11:50 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:21:03.062 04:11:50 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f60bad50-d738-4eaa-a035-a620a7eefc48 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:21:03.325 [2024-12-06 04:11:50.604845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.325 [2024-12-06 04:11:50.605005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:03.325 [2024-12-06 04:11:50.605025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:03.325 [2024-12-06 04:11:50.605033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.325 [2024-12-06 04:11:50.607317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.325 [2024-12-06 04:11:50.607346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:03.325 [2024-12-06 04:11:50.607354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.263 ms 00:21:03.325 [2024-12-06 04:11:50.607360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.325 [2024-12-06 04:11:50.607444] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:03.325 [2024-12-06 04:11:50.608058] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:03.325 [2024-12-06 04:11:50.608082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.325 [2024-12-06 04:11:50.608088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:03.325 [2024-12-06 04:11:50.608097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:21:03.325 [2024-12-06 04:11:50.608102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.325 [2024-12-06 04:11:50.608300] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 35e3cd2c-a5a2-441a-aebe-c05fd677fe36 00:21:03.325 [2024-12-06 04:11:50.609232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.325 [2024-12-06 04:11:50.609258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:03.325 [2024-12-06 04:11:50.609266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:03.325 [2024-12-06 04:11:50.609273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.325 [2024-12-06 04:11:50.613976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.325 [2024-12-06 04:11:50.614081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:03.325 [2024-12-06 04:11:50.614094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.651 ms 00:21:03.325 [2024-12-06 04:11:50.614101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.325 [2024-12-06 04:11:50.614197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.325 [2024-12-06 04:11:50.614207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:03.325 [2024-12-06 04:11:50.614214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.058 ms 00:21:03.325 [2024-12-06 04:11:50.614223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.325 [2024-12-06 04:11:50.614247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.325 [2024-12-06 04:11:50.614254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:03.325 [2024-12-06 04:11:50.614261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:03.325 [2024-12-06 04:11:50.614269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.325 [2024-12-06 04:11:50.614290] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:03.325 [2024-12-06 04:11:50.617226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.325 [2024-12-06 04:11:50.617315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:03.325 [2024-12-06 04:11:50.617331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.938 ms 00:21:03.325 [2024-12-06 04:11:50.617338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.325 [2024-12-06 04:11:50.617376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.325 [2024-12-06 04:11:50.617393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:03.325 [2024-12-06 04:11:50.617401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:03.325 [2024-12-06 04:11:50.617407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.325 [2024-12-06 04:11:50.617439] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:03.325 [2024-12-06 04:11:50.617552] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:03.325 [2024-12-06 04:11:50.617565] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:03.325 [2024-12-06 04:11:50.617573] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:03.325 [2024-12-06 04:11:50.617582] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:03.325 [2024-12-06 04:11:50.617589] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:03.325 [2024-12-06 04:11:50.617597] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:03.325 [2024-12-06 04:11:50.617602] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:03.325 [2024-12-06 04:11:50.617610] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:03.325 [2024-12-06 04:11:50.617618] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:03.325 [2024-12-06 04:11:50.617625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.325 [2024-12-06 04:11:50.617631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:03.325 [2024-12-06 04:11:50.617638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:21:03.325 [2024-12-06 04:11:50.617644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.325 [2024-12-06 04:11:50.617737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.325 
[2024-12-06 04:11:50.617745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:03.325 [2024-12-06 04:11:50.617753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:03.325 [2024-12-06 04:11:50.617758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.325 [2024-12-06 04:11:50.617855] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:03.325 [2024-12-06 04:11:50.617863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:03.325 [2024-12-06 04:11:50.617870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:03.325 [2024-12-06 04:11:50.617876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.326 [2024-12-06 04:11:50.617884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:03.326 [2024-12-06 04:11:50.617889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:03.326 [2024-12-06 04:11:50.617895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:03.326 [2024-12-06 04:11:50.617901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:03.326 [2024-12-06 04:11:50.617908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:03.326 [2024-12-06 04:11:50.617913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:03.326 [2024-12-06 04:11:50.617931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:03.326 [2024-12-06 04:11:50.617937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:03.326 [2024-12-06 04:11:50.617945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:03.326 [2024-12-06 04:11:50.617950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:03.326 [2024-12-06 04:11:50.617957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:03.326 [2024-12-06 04:11:50.617963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.326 [2024-12-06 04:11:50.617971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:03.326 [2024-12-06 04:11:50.617976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:03.326 [2024-12-06 04:11:50.617982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.326 [2024-12-06 04:11:50.617988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:03.326 [2024-12-06 04:11:50.617994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:03.326 [2024-12-06 04:11:50.618000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:03.326 [2024-12-06 04:11:50.618006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:03.326 [2024-12-06 04:11:50.618011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:03.326 [2024-12-06 04:11:50.618018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:03.326 [2024-12-06 04:11:50.618023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:03.326 [2024-12-06 04:11:50.618030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:03.326 [2024-12-06 04:11:50.618037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:03.326 [2024-12-06 04:11:50.618044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:21:03.326 [2024-12-06 04:11:50.618049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:03.326 [2024-12-06 04:11:50.618055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:03.326 [2024-12-06 04:11:50.618061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:03.326 [2024-12-06 04:11:50.618068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:03.326 [2024-12-06 04:11:50.618073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:03.326 [2024-12-06 04:11:50.618080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:03.326 [2024-12-06 04:11:50.618085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:03.326 [2024-12-06 04:11:50.618091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:03.326 [2024-12-06 04:11:50.618096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:03.326 [2024-12-06 04:11:50.618104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:03.326 [2024-12-06 04:11:50.618109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.326 [2024-12-06 04:11:50.618116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:03.326 [2024-12-06 04:11:50.618121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:03.326 [2024-12-06 04:11:50.618127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.326 [2024-12-06 04:11:50.618132] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:03.326 [2024-12-06 04:11:50.618139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:03.326 [2024-12-06 04:11:50.618145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:03.326 [2024-12-06 04:11:50.618151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.326 [2024-12-06 04:11:50.618158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:03.326 [2024-12-06 04:11:50.618166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:03.326 [2024-12-06 04:11:50.618171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:03.326 [2024-12-06 04:11:50.618178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:03.326 [2024-12-06 04:11:50.618183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:03.326 [2024-12-06 04:11:50.618189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:03.326 [2024-12-06 04:11:50.618195] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:03.326 [2024-12-06 04:11:50.618204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:03.326 [2024-12-06 04:11:50.618212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:03.326 [2024-12-06 04:11:50.618219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:03.326 [2024-12-06 04:11:50.618225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:21:03.326 [2024-12-06 04:11:50.618232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:03.326 [2024-12-06 04:11:50.618238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:03.326 [2024-12-06 04:11:50.618244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:03.326 [2024-12-06 04:11:50.618250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:03.326 [2024-12-06 04:11:50.618257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:03.326 [2024-12-06 04:11:50.618262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:03.326 [2024-12-06 04:11:50.618271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:03.326 [2024-12-06 04:11:50.618276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:03.326 [2024-12-06 04:11:50.618283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:03.326 [2024-12-06 04:11:50.618289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:03.326 [2024-12-06 04:11:50.618296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:03.326 [2024-12-06 04:11:50.618301] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:03.326 [2024-12-06 04:11:50.618311] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:03.326 [2024-12-06 04:11:50.618317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:03.326 [2024-12-06 04:11:50.618324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:03.326 [2024-12-06 04:11:50.618330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:03.326 [2024-12-06 04:11:50.618337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:03.326 [2024-12-06 04:11:50.618343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.326 [2024-12-06 04:11:50.618351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:03.326 [2024-12-06 04:11:50.618356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:21:03.326 [2024-12-06 04:11:50.618363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.326 [2024-12-06 04:11:50.618426] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:21:03.326 [2024-12-06 04:11:50.618437] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:05.901 [2024-12-06 04:11:52.959641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.901 [2024-12-06 04:11:52.959699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:05.901 [2024-12-06 04:11:52.959729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2341.205 ms 00:21:05.901 [2024-12-06 04:11:52.959741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.901 [2024-12-06 04:11:52.984957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.901 [2024-12-06 04:11:52.985007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:05.901 [2024-12-06 04:11:52.985020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.975 ms 00:21:05.901 [2024-12-06 04:11:52.985030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.901 [2024-12-06 04:11:52.985166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.901 [2024-12-06 04:11:52.985178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:05.901 [2024-12-06 04:11:52.985201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:05.901 [2024-12-06 04:11:52.985213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.901 [2024-12-06 04:11:53.038113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.901 [2024-12-06 04:11:53.038163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:05.901 [2024-12-06 04:11:53.038176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.867 ms 00:21:05.901 [2024-12-06 04:11:53.038187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.901 [2024-12-06 04:11:53.038292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.901 [2024-12-06 04:11:53.038305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:05.901 [2024-12-06 04:11:53.038314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:05.901 [2024-12-06 04:11:53.038323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.901 [2024-12-06 04:11:53.038668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.901 [2024-12-06 04:11:53.038687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:05.901 [2024-12-06 04:11:53.038697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:21:05.901 [2024-12-06 04:11:53.038706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.901 [2024-12-06 04:11:53.038838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.901 [2024-12-06 04:11:53.038849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:05.901 [2024-12-06 04:11:53.038870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:21:05.901 [2024-12-06 04:11:53.038881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.901 [2024-12-06 04:11:53.052990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.901 [2024-12-06 04:11:53.053024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:21:05.901 [2024-12-06 04:11:53.053033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.084 ms 00:21:05.901 [2024-12-06 04:11:53.053043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.901 [2024-12-06 04:11:53.064277] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:05.901 [2024-12-06 04:11:53.078571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.901 [2024-12-06 04:11:53.078606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:05.901 [2024-12-06 04:11:53.078618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.432 ms 00:21:05.901 [2024-12-06 04:11:53.078626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.901 [2024-12-06 04:11:53.146516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.901 [2024-12-06 04:11:53.146702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:05.901 [2024-12-06 04:11:53.146740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.813 ms 00:21:05.901 [2024-12-06 04:11:53.146749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.901 [2024-12-06 04:11:53.146975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.901 [2024-12-06 04:11:53.146987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:05.901 [2024-12-06 04:11:53.147000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:21:05.901 [2024-12-06 04:11:53.147008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.901 [2024-12-06 04:11:53.170080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.901 [2024-12-06 04:11:53.170118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:05.901 [2024-12-06 04:11:53.170131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.038 ms 00:21:05.901 [2024-12-06 04:11:53.170139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.901 [2024-12-06 04:11:53.193364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.901 [2024-12-06 04:11:53.193504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:05.901 [2024-12-06 04:11:53.193526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.165 ms 00:21:05.901 [2024-12-06 04:11:53.193534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.901 [2024-12-06 04:11:53.194128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.901 [2024-12-06 04:11:53.194147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:05.901 [2024-12-06 04:11:53.194158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:21:05.902 [2024-12-06 04:11:53.194165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.902 [2024-12-06 04:11:53.262843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.902 [2024-12-06 04:11:53.262890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:05.902 [2024-12-06 04:11:53.262908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.629 ms 00:21:05.902 [2024-12-06 04:11:53.262916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
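The sizes reported through this bring-up are mutually consistent, and it is worth making the arithmetic explicit: the 102400 MiB data_btm region holds 26214400 blocks of 4096 B, the --overprovisioning 10 argument withholds 10% of that (leaving the 23592960 exposed blocks later reported for ftl0), and at 4 bytes per L2P entry ("L2P address size: 4" above) that maps to the 90.00 MiB l2p region in the layout dump. A minimal re-derivation in shell, with constants copied from the records above — a cross-check of the log, not FTL source logic:

  data_blocks=$((102400 * 1048576 / 4096))   # data_btm: 102400 MiB of 4096 B blocks = 26214400
  echo $((data_blocks * 90 / 100))           # minus 10% overprovisioning -> 23592960 blocks
  echo $((23592960 * 4 / 1048576))           # 4 B per L2P entry -> 90 MiB l2p region

The --l2p_dram_limit 60 knob then caps how much of that 90 MiB table may stay resident, which is exactly what the "l2p maximum resident size is: 59 (of 60) MiB" notice above reflects.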
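Every management step in this sequence is logged as the same trace_step quadruplet (427: Action, 428: name, 430: duration, 431: status), so per-step startup cost can be tabulated mechanically. A rough sketch, assuming the FTL records have first been de-muxed into one record per line in a file called ftl_startup.log — both the filename and the one-record-per-line form are assumptions about pre-processing, not something the test produces:

  # pair each step name with its duration using the 428:/430: markers
  paste \
    <(grep '428:trace_step' ftl_startup.log | sed 's/.*name: //') \
    <(grep '430:trace_step' ftl_startup.log | sed 's/.*duration: //')

On this run such a table would be dominated by "Scrub NV cache" at 2341.205 ms (above), accounting for most of the 2731.244 ms "FTL startup" total reported just below.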
00:21:05.902 [2024-12-06 04:11:53.287254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.902 [2024-12-06 04:11:53.287295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:05.902 [2024-12-06 04:11:53.287309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.238 ms 00:21:05.902 [2024-12-06 04:11:53.287317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.902 [2024-12-06 04:11:53.310919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.902 [2024-12-06 04:11:53.311054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:05.902 [2024-12-06 04:11:53.311073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.537 ms 00:21:05.902 [2024-12-06 04:11:53.311080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.902 [2024-12-06 04:11:53.335165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.902 [2024-12-06 04:11:53.335217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:05.902 [2024-12-06 04:11:53.335230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.012 ms 00:21:05.902 [2024-12-06 04:11:53.335238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.902 [2024-12-06 04:11:53.335303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.902 [2024-12-06 04:11:53.335315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:05.902 [2024-12-06 04:11:53.335328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:05.902 [2024-12-06 04:11:53.335335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.902 [2024-12-06 04:11:53.335406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.902 [2024-12-06 04:11:53.335415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:05.902 [2024-12-06 04:11:53.335424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:05.902 [2024-12-06 04:11:53.335432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.902 [2024-12-06 04:11:53.336479] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:05.902 [2024-12-06 04:11:53.339556] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2731.244 ms, result 0 00:21:05.902 [2024-12-06 04:11:53.340248] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:05.902 { 00:21:05.902 "name": "ftl0", 00:21:05.902 "uuid": "35e3cd2c-a5a2-441a-aebe-c05fd677fe36" 00:21:05.902 } 00:21:05.902 04:11:53 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:21:05.902 04:11:53 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:21:05.902 04:11:53 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:05.902 04:11:53 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:21:05.902 04:11:53 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:05.902 04:11:53 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:05.902 04:11:53 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:06.160 04:11:53 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:06.417 [ 00:21:06.417 { 00:21:06.417 "name": "ftl0", 00:21:06.417 "aliases": [ 00:21:06.417 "35e3cd2c-a5a2-441a-aebe-c05fd677fe36" 00:21:06.417 ], 00:21:06.417 "product_name": "FTL disk", 00:21:06.417 "block_size": 4096, 00:21:06.417 "num_blocks": 23592960, 00:21:06.417 "uuid": "35e3cd2c-a5a2-441a-aebe-c05fd677fe36", 00:21:06.417 "assigned_rate_limits": { 00:21:06.418 "rw_ios_per_sec": 0, 00:21:06.418 "rw_mbytes_per_sec": 0, 00:21:06.418 "r_mbytes_per_sec": 0, 00:21:06.418 "w_mbytes_per_sec": 0 00:21:06.418 }, 00:21:06.418 "claimed": false, 00:21:06.418 "zoned": false, 00:21:06.418 "supported_io_types": { 00:21:06.418 "read": true, 00:21:06.418 "write": true, 00:21:06.418 "unmap": true, 00:21:06.418 "flush": true, 00:21:06.418 "reset": false, 00:21:06.418 "nvme_admin": false, 00:21:06.418 "nvme_io": false, 00:21:06.418 "nvme_io_md": false, 00:21:06.418 "write_zeroes": true, 00:21:06.418 "zcopy": false, 00:21:06.418 "get_zone_info": false, 00:21:06.418 "zone_management": false, 00:21:06.418 "zone_append": false, 00:21:06.418 "compare": false, 00:21:06.418 "compare_and_write": false, 00:21:06.418 "abort": false, 00:21:06.418 "seek_hole": false, 00:21:06.418 "seek_data": false, 00:21:06.418 "copy": false, 00:21:06.418 "nvme_iov_md": false 00:21:06.418 }, 00:21:06.418 "driver_specific": { 00:21:06.418 "ftl": { 00:21:06.418 "base_bdev": "f60bad50-d738-4eaa-a035-a620a7eefc48", 00:21:06.418 "cache": "nvc0n1p0" 00:21:06.418 } 00:21:06.418 } 00:21:06.418 } 00:21:06.418 ] 00:21:06.418 04:11:53 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:21:06.418 04:11:53 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:21:06.418 04:11:53 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:06.675 04:11:53 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:21:06.675 04:11:53 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:21:06.675 04:11:54 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:21:06.675 { 00:21:06.675 "name": "ftl0", 00:21:06.675 "aliases": [ 00:21:06.675 "35e3cd2c-a5a2-441a-aebe-c05fd677fe36" 00:21:06.675 ], 00:21:06.675 "product_name": "FTL disk", 00:21:06.675 "block_size": 4096, 00:21:06.675 "num_blocks": 23592960, 00:21:06.675 "uuid": "35e3cd2c-a5a2-441a-aebe-c05fd677fe36", 00:21:06.675 "assigned_rate_limits": { 00:21:06.675 "rw_ios_per_sec": 0, 00:21:06.675 "rw_mbytes_per_sec": 0, 00:21:06.675 "r_mbytes_per_sec": 0, 00:21:06.675 "w_mbytes_per_sec": 0 00:21:06.675 }, 00:21:06.675 "claimed": false, 00:21:06.675 "zoned": false, 00:21:06.675 "supported_io_types": { 00:21:06.675 "read": true, 00:21:06.675 "write": true, 00:21:06.675 "unmap": true, 00:21:06.675 "flush": true, 00:21:06.675 "reset": false, 00:21:06.675 "nvme_admin": false, 00:21:06.675 "nvme_io": false, 00:21:06.675 "nvme_io_md": false, 00:21:06.675 "write_zeroes": true, 00:21:06.675 "zcopy": false, 00:21:06.675 "get_zone_info": false, 00:21:06.675 "zone_management": false, 00:21:06.675 "zone_append": false, 00:21:06.675 "compare": false, 00:21:06.675 "compare_and_write": false, 00:21:06.675 "abort": false, 00:21:06.675 "seek_hole": false, 00:21:06.675 "seek_data": false, 00:21:06.675 "copy": false, 00:21:06.675 "nvme_iov_md": false 00:21:06.675 }, 00:21:06.675 "driver_specific": { 00:21:06.675 "ftl": { 00:21:06.675 "base_bdev": "f60bad50-d738-4eaa-a035-a620a7eefc48", 
00:21:06.675 "cache": "nvc0n1p0" 00:21:06.675 } 00:21:06.675 } 00:21:06.675 } 00:21:06.675 ]' 00:21:06.675 04:11:54 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:21:06.932 04:11:54 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:21:06.932 04:11:54 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:06.932 [2024-12-06 04:11:54.387833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.932 [2024-12-06 04:11:54.387885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:06.932 [2024-12-06 04:11:54.387901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:06.932 [2024-12-06 04:11:54.387913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.932 [2024-12-06 04:11:54.387947] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:06.932 [2024-12-06 04:11:54.390521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.932 [2024-12-06 04:11:54.390653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:06.932 [2024-12-06 04:11:54.390676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.556 ms 00:21:06.932 [2024-12-06 04:11:54.390684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.932 [2024-12-06 04:11:54.391211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.932 [2024-12-06 04:11:54.391239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:06.932 [2024-12-06 04:11:54.391256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:21:06.932 [2024-12-06 04:11:54.391268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.932 [2024-12-06 04:11:54.395137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.932 [2024-12-06 04:11:54.395172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:06.932 [2024-12-06 04:11:54.395184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.833 ms 00:21:06.932 [2024-12-06 04:11:54.395192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.932 [2024-12-06 04:11:54.402342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.932 [2024-12-06 04:11:54.402371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:06.932 [2024-12-06 04:11:54.402383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.103 ms 00:21:06.932 [2024-12-06 04:11:54.402392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.932 [2024-12-06 04:11:54.425807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.932 [2024-12-06 04:11:54.425842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:06.932 [2024-12-06 04:11:54.425856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.335 ms 00:21:06.932 [2024-12-06 04:11:54.425864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.932 [2024-12-06 04:11:54.440516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.932 [2024-12-06 04:11:54.440551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:06.932 [2024-12-06 04:11:54.440566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.588 ms 00:21:06.932 [2024-12-06 04:11:54.440575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.932 [2024-12-06 04:11:54.440785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.932 [2024-12-06 04:11:54.440797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:06.932 [2024-12-06 04:11:54.440807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:21:06.932 [2024-12-06 04:11:54.440815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.190 [2024-12-06 04:11:54.463259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.190 [2024-12-06 04:11:54.463292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:07.190 [2024-12-06 04:11:54.463304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.417 ms 00:21:07.190 [2024-12-06 04:11:54.463312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.190 [2024-12-06 04:11:54.485495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.190 [2024-12-06 04:11:54.485622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:07.190 [2024-12-06 04:11:54.485643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.127 ms 00:21:07.190 [2024-12-06 04:11:54.485652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.190 [2024-12-06 04:11:54.507745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.190 [2024-12-06 04:11:54.507859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:07.190 [2024-12-06 04:11:54.507878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.036 ms 00:21:07.190 [2024-12-06 04:11:54.507885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.190 [2024-12-06 04:11:54.530071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.190 [2024-12-06 04:11:54.530104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:07.190 [2024-12-06 04:11:54.530115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.083 ms 00:21:07.190 [2024-12-06 04:11:54.530122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.190 [2024-12-06 04:11:54.530180] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:07.190 [2024-12-06 04:11:54.530195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530385] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 
[2024-12-06 04:11:54.530620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:21:07.190 [2024-12-06 04:11:54.530866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.530994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:07.190 [2024-12-06 04:11:54.531229] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:07.190 [2024-12-06 04:11:54.531242] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 35e3cd2c-a5a2-441a-aebe-c05fd677fe36 00:21:07.190 [2024-12-06 04:11:54.531249] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:07.190 [2024-12-06 04:11:54.531257] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:07.190 [2024-12-06 04:11:54.531264] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:07.190 [2024-12-06 04:11:54.531275] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:07.190 [2024-12-06 04:11:54.531282] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:07.190 [2024-12-06 04:11:54.531291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:21:07.190 [2024-12-06 04:11:54.531297] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:07.190 [2024-12-06 04:11:54.531305] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:07.190 [2024-12-06 04:11:54.531311] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:07.190 [2024-12-06 04:11:54.531320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.190 [2024-12-06 04:11:54.531327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:07.190 [2024-12-06 04:11:54.531337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:21:07.190 [2024-12-06 04:11:54.531344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.190 [2024-12-06 04:11:54.543881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.190 [2024-12-06 04:11:54.543916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:07.190 [2024-12-06 04:11:54.543929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.495 ms 00:21:07.190 [2024-12-06 04:11:54.543937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.190 [2024-12-06 04:11:54.544310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.190 [2024-12-06 04:11:54.544325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:07.190 [2024-12-06 04:11:54.544336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:21:07.190 [2024-12-06 04:11:54.544343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.190 [2024-12-06 04:11:54.588010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.190 [2024-12-06 04:11:54.588050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:07.190 [2024-12-06 04:11:54.588063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.190 [2024-12-06 04:11:54.588072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.190 [2024-12-06 04:11:54.588193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.190 [2024-12-06 04:11:54.588203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:07.190 [2024-12-06 04:11:54.588214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.191 [2024-12-06 04:11:54.588223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.191 [2024-12-06 04:11:54.588283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.191 [2024-12-06 04:11:54.588293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:07.191 [2024-12-06 04:11:54.588308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.191 [2024-12-06 04:11:54.588316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.191 [2024-12-06 04:11:54.588345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.191 [2024-12-06 04:11:54.588354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:07.191 [2024-12-06 04:11:54.588364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.191 [2024-12-06 04:11:54.588373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.191 [2024-12-06 04:11:54.669979] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.191 [2024-12-06 04:11:54.670156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:07.191 [2024-12-06 04:11:54.670177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.191 [2024-12-06 04:11:54.670186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.450 [2024-12-06 04:11:54.733236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.450 [2024-12-06 04:11:54.733399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:07.450 [2024-12-06 04:11:54.733419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.450 [2024-12-06 04:11:54.733428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.450 [2024-12-06 04:11:54.733532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.450 [2024-12-06 04:11:54.733542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:07.450 [2024-12-06 04:11:54.733554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.450 [2024-12-06 04:11:54.733564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.450 [2024-12-06 04:11:54.733609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.450 [2024-12-06 04:11:54.733617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:07.450 [2024-12-06 04:11:54.733626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.450 [2024-12-06 04:11:54.733633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.450 [2024-12-06 04:11:54.733768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.450 [2024-12-06 04:11:54.733779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:07.450 [2024-12-06 04:11:54.733788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.450 [2024-12-06 04:11:54.733797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.450 [2024-12-06 04:11:54.733845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.450 [2024-12-06 04:11:54.733854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:07.450 [2024-12-06 04:11:54.733864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.450 [2024-12-06 04:11:54.733871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.450 [2024-12-06 04:11:54.733924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.450 [2024-12-06 04:11:54.733932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:07.450 [2024-12-06 04:11:54.733942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.450 [2024-12-06 04:11:54.733950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.450 [2024-12-06 04:11:54.734004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.450 [2024-12-06 04:11:54.734013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:07.450 [2024-12-06 04:11:54.734022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.450 [2024-12-06 04:11:54.734029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:07.450 [2024-12-06 04:11:54.734195] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 346.345 ms, result 0 00:21:07.450 true 00:21:07.450 04:11:54 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76222 00:21:07.450 04:11:54 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76222 ']' 00:21:07.450 04:11:54 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76222 00:21:07.450 04:11:54 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:21:07.450 04:11:54 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.450 04:11:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76222 00:21:07.450 04:11:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:07.450 killing process with pid 76222 00:21:07.450 04:11:54 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:07.450 04:11:54 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76222' 00:21:07.450 04:11:54 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76222 00:21:07.450 04:11:54 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76222 00:21:14.067 04:12:01 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:21:15.001 65536+0 records in 00:21:15.001 65536+0 records out 00:21:15.001 268435456 bytes (268 MB, 256 MiB) copied, 1.06934 s, 251 MB/s 00:21:15.001 04:12:02 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:15.001 [2024-12-06 04:12:02.384999] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
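The dd step traced above generates the 256 MiB random test pattern that spdk_dd then writes through the ftl0 bdev: 65536 blocks of 4 KiB (bs=4K) come to 268,435,456 bytes, and 268,435,456 bytes / 1.06934 s is roughly 251 MB/s, exactly as the dd summary reports. A minimal re-run of that step is sketched below; the destination path is inferred from the --if argument of the spdk_dd invocation that follows it (the original script may instead redirect dd's stdout, since the trace shows no of= argument):

    # Regenerate the random pattern consumed by spdk_dd; bs=4K is 4096 bytes,
    # so count=65536 yields 268435456 bytes = 256 MiB, matching the "records out" summary.
    dd if=/dev/urandom \
       of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
       bs=4K count=65536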
00:21:15.001 [2024-12-06 04:12:02.385121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76400 ] 00:21:15.259 [2024-12-06 04:12:02.543656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.259 [2024-12-06 04:12:02.649909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.517 [2024-12-06 04:12:02.907393] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:15.517 [2024-12-06 04:12:02.907612] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:15.777 [2024-12-06 04:12:03.066466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.777 [2024-12-06 04:12:03.066539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:15.777 [2024-12-06 04:12:03.066553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:15.777 [2024-12-06 04:12:03.066563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.777 [2024-12-06 04:12:03.069195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.777 [2024-12-06 04:12:03.069328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:15.777 [2024-12-06 04:12:03.069345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.614 ms 00:21:15.777 [2024-12-06 04:12:03.069352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.777 [2024-12-06 04:12:03.069418] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:15.777 [2024-12-06 04:12:03.070095] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:15.777 [2024-12-06 04:12:03.070115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.777 [2024-12-06 04:12:03.070123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:15.777 [2024-12-06 04:12:03.070131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:21:15.777 [2024-12-06 04:12:03.070139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.777 [2024-12-06 04:12:03.071552] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:15.777 [2024-12-06 04:12:03.084619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.777 [2024-12-06 04:12:03.084771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:15.777 [2024-12-06 04:12:03.084791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.069 ms 00:21:15.777 [2024-12-06 04:12:03.084799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.777 [2024-12-06 04:12:03.084882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.777 [2024-12-06 04:12:03.084894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:15.777 [2024-12-06 04:12:03.084902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:15.777 [2024-12-06 04:12:03.084910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.777 [2024-12-06 04:12:03.089667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:15.777 [2024-12-06 04:12:03.089697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:15.777 [2024-12-06 04:12:03.089706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.716 ms 00:21:15.777 [2024-12-06 04:12:03.089713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.777 [2024-12-06 04:12:03.089818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.777 [2024-12-06 04:12:03.089827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:15.777 [2024-12-06 04:12:03.089836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:15.778 [2024-12-06 04:12:03.089844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.778 [2024-12-06 04:12:03.089870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.778 [2024-12-06 04:12:03.089878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:15.778 [2024-12-06 04:12:03.089886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:15.778 [2024-12-06 04:12:03.089893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.778 [2024-12-06 04:12:03.089914] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:15.778 [2024-12-06 04:12:03.093110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.778 [2024-12-06 04:12:03.093135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:15.778 [2024-12-06 04:12:03.093144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.203 ms 00:21:15.778 [2024-12-06 04:12:03.093151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.778 [2024-12-06 04:12:03.093186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.778 [2024-12-06 04:12:03.093195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:15.778 [2024-12-06 04:12:03.093202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:15.778 [2024-12-06 04:12:03.093209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.778 [2024-12-06 04:12:03.093228] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:15.778 [2024-12-06 04:12:03.093246] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:15.778 [2024-12-06 04:12:03.093280] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:15.778 [2024-12-06 04:12:03.093295] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:15.778 [2024-12-06 04:12:03.093396] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:15.778 [2024-12-06 04:12:03.093406] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:15.778 [2024-12-06 04:12:03.093417] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:15.778 [2024-12-06 04:12:03.093429] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:15.778 [2024-12-06 04:12:03.093437] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:15.778 [2024-12-06 04:12:03.093446] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:15.778 [2024-12-06 04:12:03.093453] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:15.778 [2024-12-06 04:12:03.093460] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:15.778 [2024-12-06 04:12:03.093467] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:15.778 [2024-12-06 04:12:03.093474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.778 [2024-12-06 04:12:03.093481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:15.778 [2024-12-06 04:12:03.093489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:21:15.778 [2024-12-06 04:12:03.093496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.778 [2024-12-06 04:12:03.093583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.778 [2024-12-06 04:12:03.093594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:15.778 [2024-12-06 04:12:03.093602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:15.778 [2024-12-06 04:12:03.093608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.778 [2024-12-06 04:12:03.093708] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:15.778 [2024-12-06 04:12:03.093728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:15.778 [2024-12-06 04:12:03.093737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:15.778 [2024-12-06 04:12:03.093744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.778 [2024-12-06 04:12:03.093752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:15.778 [2024-12-06 04:12:03.093758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:15.778 [2024-12-06 04:12:03.093765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:15.778 [2024-12-06 04:12:03.093773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:15.778 [2024-12-06 04:12:03.093779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:15.778 [2024-12-06 04:12:03.093786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:15.778 [2024-12-06 04:12:03.093793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:15.778 [2024-12-06 04:12:03.093805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:15.778 [2024-12-06 04:12:03.093814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:15.778 [2024-12-06 04:12:03.093821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:15.778 [2024-12-06 04:12:03.093827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:15.778 [2024-12-06 04:12:03.093833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.778 [2024-12-06 04:12:03.093840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:15.778 [2024-12-06 04:12:03.093847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:15.778 [2024-12-06 04:12:03.093853] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.778 [2024-12-06 04:12:03.093860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:15.778 [2024-12-06 04:12:03.093867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:15.778 [2024-12-06 04:12:03.093873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:15.778 [2024-12-06 04:12:03.093880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:15.778 [2024-12-06 04:12:03.093886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:15.778 [2024-12-06 04:12:03.093892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:15.778 [2024-12-06 04:12:03.093899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:15.778 [2024-12-06 04:12:03.093906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:15.778 [2024-12-06 04:12:03.093912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:15.778 [2024-12-06 04:12:03.093919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:15.778 [2024-12-06 04:12:03.093925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:15.778 [2024-12-06 04:12:03.093932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:15.778 [2024-12-06 04:12:03.093938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:15.778 [2024-12-06 04:12:03.093944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:15.778 [2024-12-06 04:12:03.093950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:15.778 [2024-12-06 04:12:03.093957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:15.778 [2024-12-06 04:12:03.093963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:15.778 [2024-12-06 04:12:03.093969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:15.778 [2024-12-06 04:12:03.093975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:15.778 [2024-12-06 04:12:03.093982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:15.778 [2024-12-06 04:12:03.093988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.778 [2024-12-06 04:12:03.093994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:15.778 [2024-12-06 04:12:03.094001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:15.778 [2024-12-06 04:12:03.094007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.778 [2024-12-06 04:12:03.094014] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:15.778 [2024-12-06 04:12:03.094023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:15.778 [2024-12-06 04:12:03.094032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:15.778 [2024-12-06 04:12:03.094039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.778 [2024-12-06 04:12:03.094046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:15.778 [2024-12-06 04:12:03.094052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:15.778 [2024-12-06 04:12:03.094059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:15.778 
[2024-12-06 04:12:03.094065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:15.778 [2024-12-06 04:12:03.094071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:15.778 [2024-12-06 04:12:03.094078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:15.778 [2024-12-06 04:12:03.094086] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:15.778 [2024-12-06 04:12:03.094094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:15.778 [2024-12-06 04:12:03.094103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:15.778 [2024-12-06 04:12:03.094110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:15.778 [2024-12-06 04:12:03.094117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:15.778 [2024-12-06 04:12:03.094124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:15.778 [2024-12-06 04:12:03.094131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:15.778 [2024-12-06 04:12:03.094138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:15.778 [2024-12-06 04:12:03.094145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:15.778 [2024-12-06 04:12:03.094151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:15.778 [2024-12-06 04:12:03.094159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:15.778 [2024-12-06 04:12:03.094165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:15.778 [2024-12-06 04:12:03.094172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:15.779 [2024-12-06 04:12:03.094179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:15.779 [2024-12-06 04:12:03.094185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:15.779 [2024-12-06 04:12:03.094193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:15.779 [2024-12-06 04:12:03.094199] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:15.779 [2024-12-06 04:12:03.094207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:15.779 [2024-12-06 04:12:03.094215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:15.779 [2024-12-06 04:12:03.094223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:15.779 [2024-12-06 04:12:03.094230] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:15.779 [2024-12-06 04:12:03.094237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:15.779 [2024-12-06 04:12:03.094244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.779 [2024-12-06 04:12:03.094255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:15.779 [2024-12-06 04:12:03.094262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:21:15.779 [2024-12-06 04:12:03.094269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.779 [2024-12-06 04:12:03.119728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.779 [2024-12-06 04:12:03.119857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:15.779 [2024-12-06 04:12:03.119872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.394 ms 00:21:15.779 [2024-12-06 04:12:03.119880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.779 [2024-12-06 04:12:03.120004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.779 [2024-12-06 04:12:03.120015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:15.779 [2024-12-06 04:12:03.120023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:21:15.779 [2024-12-06 04:12:03.120030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.779 [2024-12-06 04:12:03.157685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.779 [2024-12-06 04:12:03.157845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:15.779 [2024-12-06 04:12:03.157868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.634 ms 00:21:15.779 [2024-12-06 04:12:03.157879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.779 [2024-12-06 04:12:03.157973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.779 [2024-12-06 04:12:03.157985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:15.779 [2024-12-06 04:12:03.157995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:15.779 [2024-12-06 04:12:03.158004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.779 [2024-12-06 04:12:03.158319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.779 [2024-12-06 04:12:03.158345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:15.779 [2024-12-06 04:12:03.158362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:21:15.779 [2024-12-06 04:12:03.158370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.779 [2024-12-06 04:12:03.158504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.779 [2024-12-06 04:12:03.158519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:15.779 [2024-12-06 04:12:03.158527] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:21:15.779 [2024-12-06 04:12:03.158534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.779 [2024-12-06 04:12:03.171745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.779 [2024-12-06 04:12:03.171775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:15.779 [2024-12-06 04:12:03.171785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.191 ms 00:21:15.779 [2024-12-06 04:12:03.171793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.779 [2024-12-06 04:12:03.184548] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:15.779 [2024-12-06 04:12:03.184582] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:15.779 [2024-12-06 04:12:03.184594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.779 [2024-12-06 04:12:03.184601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:15.779 [2024-12-06 04:12:03.184610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.708 ms 00:21:15.779 [2024-12-06 04:12:03.184616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.779 [2024-12-06 04:12:03.209295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.779 [2024-12-06 04:12:03.209327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:15.779 [2024-12-06 04:12:03.209338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.612 ms 00:21:15.779 [2024-12-06 04:12:03.209347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.779 [2024-12-06 04:12:03.221255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.779 [2024-12-06 04:12:03.221284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:15.779 [2024-12-06 04:12:03.221294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.840 ms 00:21:15.779 [2024-12-06 04:12:03.221301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.779 [2024-12-06 04:12:03.232999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.779 [2024-12-06 04:12:03.233028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:15.779 [2024-12-06 04:12:03.233037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.638 ms 00:21:15.779 [2024-12-06 04:12:03.233044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.779 [2024-12-06 04:12:03.233637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.779 [2024-12-06 04:12:03.233659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:15.779 [2024-12-06 04:12:03.233669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:21:15.779 [2024-12-06 04:12:03.233676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.779 [2024-12-06 04:12:03.290135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.779 [2024-12-06 04:12:03.290186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:15.779 [2024-12-06 04:12:03.290201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 56.437 ms 00:21:15.779 [2024-12-06 04:12:03.290209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.779 [2024-12-06 04:12:03.300419] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:16.038 [2024-12-06 04:12:03.314314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.038 [2024-12-06 04:12:03.314354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:16.038 [2024-12-06 04:12:03.314365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.014 ms 00:21:16.038 [2024-12-06 04:12:03.314373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.038 [2024-12-06 04:12:03.314457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.038 [2024-12-06 04:12:03.314475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:16.038 [2024-12-06 04:12:03.314484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:16.038 [2024-12-06 04:12:03.314492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.038 [2024-12-06 04:12:03.314539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.038 [2024-12-06 04:12:03.314548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:16.038 [2024-12-06 04:12:03.314556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:16.038 [2024-12-06 04:12:03.314563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.038 [2024-12-06 04:12:03.314592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.038 [2024-12-06 04:12:03.314602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:16.038 [2024-12-06 04:12:03.314610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:16.038 [2024-12-06 04:12:03.314617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.038 [2024-12-06 04:12:03.314649] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:16.038 [2024-12-06 04:12:03.314658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.038 [2024-12-06 04:12:03.314665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:16.038 [2024-12-06 04:12:03.314673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:16.038 [2024-12-06 04:12:03.314680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.038 [2024-12-06 04:12:03.338240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.038 [2024-12-06 04:12:03.338363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:16.038 [2024-12-06 04:12:03.338380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.541 ms 00:21:16.038 [2024-12-06 04:12:03.338388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.038 [2024-12-06 04:12:03.338477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.038 [2024-12-06 04:12:03.338488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:16.038 [2024-12-06 04:12:03.338497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:16.038 [2024-12-06 04:12:03.338504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
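The superblock dump earlier in this startup (Region type/blk_offs/blk_sz, in hex 4 KiB blocks) and the MiB-based layout dump describe the same regions in different units; they agree if each FTL block is 4096 bytes, which the numbers themselves imply. A quick cross-check for the l2p region (type:0x2, blk_offs:0x20, blk_sz:0x5a00), using plain shell arithmetic:

    # 0x5a00 = 23040 blocks * 4096 B = 90 MiB  -> "Region l2p ... blocks: 90.00 MiB"
    printf '%d MiB\n' $(( 0x5a00 * 4096 / 1048576 ))
    # 0x20 = 32 blocks * 4096 B = 128 KiB (0.12 MiB) -> "Region l2p ... offset: 0.12 MiB"
    printf '%d KiB\n' $(( 0x20 * 4096 / 1024 ))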
00:21:16.038 [2024-12-06 04:12:03.339354] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:16.038 [2024-12-06 04:12:03.342344] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 272.623 ms, result 0 00:21:16.038 [2024-12-06 04:12:03.343741] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:16.038 [2024-12-06 04:12:03.356404] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:16.971  [2024-12-06T04:12:05.441Z] Copying: 24/256 [MB] (24 MBps) [2024-12-06T04:12:06.374Z] Copying: 48/256 [MB] (24 MBps) [2024-12-06T04:12:07.750Z] Copying: 76/256 [MB] (28 MBps) [2024-12-06T04:12:08.684Z] Copying: 88/256 [MB] (11 MBps) [2024-12-06T04:12:09.618Z] Copying: 108/256 [MB] (19 MBps) [2024-12-06T04:12:10.552Z] Copying: 130/256 [MB] (22 MBps) [2024-12-06T04:12:11.573Z] Copying: 152/256 [MB] (21 MBps) [2024-12-06T04:12:12.517Z] Copying: 168/256 [MB] (16 MBps) [2024-12-06T04:12:13.462Z] Copying: 183/256 [MB] (14 MBps) [2024-12-06T04:12:14.405Z] Copying: 199/256 [MB] (15 MBps) [2024-12-06T04:12:15.789Z] Copying: 211/256 [MB] (11 MBps) [2024-12-06T04:12:16.362Z] Copying: 225216/262144 [kB] (9072 kBps) [2024-12-06T04:12:17.736Z] Copying: 233152/262144 [kB] (7936 kBps) [2024-12-06T04:12:18.669Z] Copying: 238/256 [MB] (11 MBps) [2024-12-06T04:12:19.268Z] Copying: 248/256 [MB] (10 MBps) [2024-12-06T04:12:19.268Z] Copying: 256/256 [MB] (average 16 MBps)[2024-12-06 04:12:19.010621] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:31.741 [2024-12-06 04:12:19.019888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.741 [2024-12-06 04:12:19.019926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:31.741 [2024-12-06 04:12:19.019939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:31.741 [2024-12-06 04:12:19.019954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.741 [2024-12-06 04:12:19.019976] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:31.741 [2024-12-06 04:12:19.022644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.741 [2024-12-06 04:12:19.022674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:31.741 [2024-12-06 04:12:19.022685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.654 ms 00:21:31.741 [2024-12-06 04:12:19.022693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.741 [2024-12-06 04:12:19.025139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.741 [2024-12-06 04:12:19.025257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:31.741 [2024-12-06 04:12:19.025272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.422 ms 00:21:31.741 [2024-12-06 04:12:19.025280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.741 [2024-12-06 04:12:19.033513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.741 [2024-12-06 04:12:19.033553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:31.741 [2024-12-06 04:12:19.033563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 8.214 ms 00:21:31.741 [2024-12-06 04:12:19.033570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.741 [2024-12-06 04:12:19.040768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.741 [2024-12-06 04:12:19.040797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:31.741 [2024-12-06 04:12:19.040807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.153 ms 00:21:31.741 [2024-12-06 04:12:19.040815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.741 [2024-12-06 04:12:19.064516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.741 [2024-12-06 04:12:19.064658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:31.741 [2024-12-06 04:12:19.064674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.660 ms 00:21:31.741 [2024-12-06 04:12:19.064682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.741 [2024-12-06 04:12:19.079582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.741 [2024-12-06 04:12:19.079738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:31.741 [2024-12-06 04:12:19.079761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.849 ms 00:21:31.741 [2024-12-06 04:12:19.079768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.741 [2024-12-06 04:12:19.079935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.741 [2024-12-06 04:12:19.079947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:31.741 [2024-12-06 04:12:19.079955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:21:31.741 [2024-12-06 04:12:19.079969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.741 [2024-12-06 04:12:19.104365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.741 [2024-12-06 04:12:19.104401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:31.741 [2024-12-06 04:12:19.104411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.379 ms 00:21:31.741 [2024-12-06 04:12:19.104418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.741 [2024-12-06 04:12:19.128679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.741 [2024-12-06 04:12:19.128735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:31.741 [2024-12-06 04:12:19.128746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.219 ms 00:21:31.741 [2024-12-06 04:12:19.128753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.741 [2024-12-06 04:12:19.152987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.741 [2024-12-06 04:12:19.153047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:31.741 [2024-12-06 04:12:19.153059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.187 ms 00:21:31.741 [2024-12-06 04:12:19.153066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.741 [2024-12-06 04:12:19.178094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.741 [2024-12-06 04:12:19.178143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:31.741 
[2024-12-06 04:12:19.178155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.948 ms 00:21:31.741 [2024-12-06 04:12:19.178162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.741 [2024-12-06 04:12:19.178212] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:31.741 [2024-12-06 04:12:19.178228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:31.741 [2024-12-06 04:12:19.178239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:31.741 [2024-12-06 04:12:19.178247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:31.741 [2024-12-06 04:12:19.178256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:31.741 [2024-12-06 04:12:19.178263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:31.741 [2024-12-06 04:12:19.178271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:31.741 [2024-12-06 04:12:19.178279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:31.741 [2024-12-06 04:12:19.178287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:31.741 [2024-12-06 04:12:19.178294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:31.741 [2024-12-06 04:12:19.178301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:31.741 [2024-12-06 04:12:19.178309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:31.741 [2024-12-06 04:12:19.178318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:31.741 [2024-12-06 04:12:19.178326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:31.741 [2024-12-06 04:12:19.178334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:31.741 [2024-12-06 04:12:19.178341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:31.741 [2024-12-06 04:12:19.178348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:31.741 [2024-12-06 04:12:19.178355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:31.741 [2024-12-06 04:12:19.178363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:31.741 [2024-12-06 04:12:19.178370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:31.741 [2024-12-06 04:12:19.178377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:31.741 [2024-12-06 04:12:19.178385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:31.741 [2024-12-06 04:12:19.178392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178610] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 
04:12:19.178838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.178992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.179000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.179019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.179027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.179035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 
00:21:31.742 [2024-12-06 04:12:19.179042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.179050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.179058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:31.742 [2024-12-06 04:12:19.179075] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:31.742 [2024-12-06 04:12:19.179084] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 35e3cd2c-a5a2-441a-aebe-c05fd677fe36 00:21:31.742 [2024-12-06 04:12:19.179093] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:31.742 [2024-12-06 04:12:19.179101] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:31.742 [2024-12-06 04:12:19.179108] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:31.742 [2024-12-06 04:12:19.179117] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:31.742 [2024-12-06 04:12:19.179124] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:31.742 [2024-12-06 04:12:19.179132] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:31.742 [2024-12-06 04:12:19.179139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:31.742 [2024-12-06 04:12:19.179146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:31.742 [2024-12-06 04:12:19.179152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:31.742 [2024-12-06 04:12:19.179160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.742 [2024-12-06 04:12:19.179171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:31.742 [2024-12-06 04:12:19.179181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:21:31.743 [2024-12-06 04:12:19.179189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.743 [2024-12-06 04:12:19.192808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.743 [2024-12-06 04:12:19.192854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:31.743 [2024-12-06 04:12:19.192865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.584 ms 00:21:31.743 [2024-12-06 04:12:19.192874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.743 [2024-12-06 04:12:19.193279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.743 [2024-12-06 04:12:19.193290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:31.743 [2024-12-06 04:12:19.193299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:21:31.743 [2024-12-06 04:12:19.193306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.743 [2024-12-06 04:12:19.232509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.743 [2024-12-06 04:12:19.232563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:31.743 [2024-12-06 04:12:19.232575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.743 [2024-12-06 04:12:19.232584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.743 [2024-12-06 04:12:19.232702] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.743 [2024-12-06 04:12:19.232713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:31.743 [2024-12-06 04:12:19.232753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.743 [2024-12-06 04:12:19.232761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.743 [2024-12-06 04:12:19.232818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.743 [2024-12-06 04:12:19.232828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:31.743 [2024-12-06 04:12:19.232837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.743 [2024-12-06 04:12:19.232846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.743 [2024-12-06 04:12:19.232865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.743 [2024-12-06 04:12:19.232878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:31.743 [2024-12-06 04:12:19.232887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.743 [2024-12-06 04:12:19.232895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.044 [2024-12-06 04:12:19.317448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.044 [2024-12-06 04:12:19.317515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:32.044 [2024-12-06 04:12:19.317529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.044 [2024-12-06 04:12:19.317537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.044 [2024-12-06 04:12:19.386482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.044 [2024-12-06 04:12:19.386544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:32.044 [2024-12-06 04:12:19.386557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.044 [2024-12-06 04:12:19.386566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.044 [2024-12-06 04:12:19.386654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.044 [2024-12-06 04:12:19.386665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:32.044 [2024-12-06 04:12:19.386674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.044 [2024-12-06 04:12:19.386682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.044 [2024-12-06 04:12:19.386747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.044 [2024-12-06 04:12:19.386757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:32.044 [2024-12-06 04:12:19.386773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.044 [2024-12-06 04:12:19.386781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.045 [2024-12-06 04:12:19.386899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.045 [2024-12-06 04:12:19.386911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:32.045 [2024-12-06 04:12:19.386919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.045 [2024-12-06 04:12:19.386927] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.045 [2024-12-06 04:12:19.386962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.045 [2024-12-06 04:12:19.386972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:32.045 [2024-12-06 04:12:19.386981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.045 [2024-12-06 04:12:19.386992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.045 [2024-12-06 04:12:19.387037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.045 [2024-12-06 04:12:19.387048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:32.045 [2024-12-06 04:12:19.387056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.045 [2024-12-06 04:12:19.387064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.045 [2024-12-06 04:12:19.387113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.045 [2024-12-06 04:12:19.387123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:32.045 [2024-12-06 04:12:19.387136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.045 [2024-12-06 04:12:19.387145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.045 [2024-12-06 04:12:19.387304] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 367.397 ms, result 0 00:21:32.982 00:21:32.982 00:21:32.982 04:12:20 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76591 00:21:32.982 04:12:20 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76591 00:21:32.982 04:12:20 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76591 ']' 00:21:32.982 04:12:20 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.982 04:12:20 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:21:32.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.982 04:12:20 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:32.982 04:12:20 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.982 04:12:20 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:32.982 04:12:20 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:32.982 [2024-12-06 04:12:20.250568] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:21:32.982 [2024-12-06 04:12:20.250693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76591 ] 00:21:32.982 [2024-12-06 04:12:20.412430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.243 [2024-12-06 04:12:20.515258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.815 04:12:21 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:33.815 04:12:21 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:33.815 04:12:21 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:21:34.076 [2024-12-06 04:12:21.402353] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:34.076 [2024-12-06 04:12:21.402707] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:34.076 [2024-12-06 04:12:21.582340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.076 [2024-12-06 04:12:21.582413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:34.076 [2024-12-06 04:12:21.582435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:34.076 [2024-12-06 04:12:21.582446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.076 [2024-12-06 04:12:21.585501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.076 [2024-12-06 04:12:21.585735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:34.076 [2024-12-06 04:12:21.585765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.031 ms 00:21:34.076 [2024-12-06 04:12:21.585775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.076 [2024-12-06 04:12:21.585912] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:34.076 [2024-12-06 04:12:21.586630] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:34.076 [2024-12-06 04:12:21.586660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.076 [2024-12-06 04:12:21.586670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:34.076 [2024-12-06 04:12:21.586683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:21:34.076 [2024-12-06 04:12:21.586692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.076 [2024-12-06 04:12:21.588692] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:34.337 [2024-12-06 04:12:21.603177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.337 [2024-12-06 04:12:21.603235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:34.337 [2024-12-06 04:12:21.603250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.492 ms 00:21:34.337 [2024-12-06 04:12:21.603260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.337 [2024-12-06 04:12:21.603379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.337 [2024-12-06 04:12:21.603394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:34.337 [2024-12-06 04:12:21.603403] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:34.337 [2024-12-06 04:12:21.603413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.337 [2024-12-06 04:12:21.612428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.337 [2024-12-06 04:12:21.612486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:34.337 [2024-12-06 04:12:21.612498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.960 ms 00:21:34.337 [2024-12-06 04:12:21.612508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.337 [2024-12-06 04:12:21.612629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.337 [2024-12-06 04:12:21.612642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:34.337 [2024-12-06 04:12:21.612652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:21:34.337 [2024-12-06 04:12:21.612665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.337 [2024-12-06 04:12:21.612691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.337 [2024-12-06 04:12:21.612702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:34.337 [2024-12-06 04:12:21.612710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:34.337 [2024-12-06 04:12:21.612757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.337 [2024-12-06 04:12:21.612783] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:34.337 [2024-12-06 04:12:21.617082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.337 [2024-12-06 04:12:21.617125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:34.338 [2024-12-06 04:12:21.617139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.303 ms 00:21:34.338 [2024-12-06 04:12:21.617147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.338 [2024-12-06 04:12:21.617231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.338 [2024-12-06 04:12:21.617241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:34.338 [2024-12-06 04:12:21.617254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:34.338 [2024-12-06 04:12:21.617265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.338 [2024-12-06 04:12:21.617288] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:34.338 [2024-12-06 04:12:21.617311] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:34.338 [2024-12-06 04:12:21.617361] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:34.338 [2024-12-06 04:12:21.617378] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:34.338 [2024-12-06 04:12:21.617487] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:34.338 [2024-12-06 04:12:21.617499] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:34.338 [2024-12-06 04:12:21.617515] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:34.338 [2024-12-06 04:12:21.617525] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:34.338 [2024-12-06 04:12:21.617538] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:34.338 [2024-12-06 04:12:21.617547] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:34.338 [2024-12-06 04:12:21.617557] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:34.338 [2024-12-06 04:12:21.617565] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:34.338 [2024-12-06 04:12:21.617576] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:34.338 [2024-12-06 04:12:21.617584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.338 [2024-12-06 04:12:21.617594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:34.338 [2024-12-06 04:12:21.617602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:21:34.338 [2024-12-06 04:12:21.617611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.338 [2024-12-06 04:12:21.617708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.338 [2024-12-06 04:12:21.617749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:34.338 [2024-12-06 04:12:21.617758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:34.338 [2024-12-06 04:12:21.617768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.338 [2024-12-06 04:12:21.617873] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:34.338 [2024-12-06 04:12:21.617886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:34.338 [2024-12-06 04:12:21.617895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:34.338 [2024-12-06 04:12:21.617906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.338 [2024-12-06 04:12:21.617914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:34.338 [2024-12-06 04:12:21.617926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:34.338 [2024-12-06 04:12:21.617933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:34.338 [2024-12-06 04:12:21.617946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:34.338 [2024-12-06 04:12:21.617954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:34.338 [2024-12-06 04:12:21.617963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:34.338 [2024-12-06 04:12:21.617970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:34.338 [2024-12-06 04:12:21.617978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:34.338 [2024-12-06 04:12:21.617985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:34.338 [2024-12-06 04:12:21.617994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:34.338 [2024-12-06 04:12:21.618001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:34.338 [2024-12-06 04:12:21.618010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.338 
[2024-12-06 04:12:21.618017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:34.338 [2024-12-06 04:12:21.618026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:34.338 [2024-12-06 04:12:21.618040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.338 [2024-12-06 04:12:21.618051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:34.338 [2024-12-06 04:12:21.618059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:34.338 [2024-12-06 04:12:21.618068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.338 [2024-12-06 04:12:21.618084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:34.338 [2024-12-06 04:12:21.618096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:34.338 [2024-12-06 04:12:21.618103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.338 [2024-12-06 04:12:21.618112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:34.338 [2024-12-06 04:12:21.618118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:34.338 [2024-12-06 04:12:21.618126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.338 [2024-12-06 04:12:21.618133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:34.338 [2024-12-06 04:12:21.618143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:34.338 [2024-12-06 04:12:21.618150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.338 [2024-12-06 04:12:21.618159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:34.338 [2024-12-06 04:12:21.618166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:34.338 [2024-12-06 04:12:21.618175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:34.338 [2024-12-06 04:12:21.618182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:34.338 [2024-12-06 04:12:21.618190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:34.338 [2024-12-06 04:12:21.618198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:34.338 [2024-12-06 04:12:21.618207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:34.338 [2024-12-06 04:12:21.618213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:34.338 [2024-12-06 04:12:21.618224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.338 [2024-12-06 04:12:21.618231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:34.338 [2024-12-06 04:12:21.618239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:34.338 [2024-12-06 04:12:21.618245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.338 [2024-12-06 04:12:21.618254] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:34.338 [2024-12-06 04:12:21.618264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:34.338 [2024-12-06 04:12:21.618274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:34.338 [2024-12-06 04:12:21.618281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.338 [2024-12-06 04:12:21.618290] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:21:34.338 [2024-12-06 04:12:21.618297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:34.338 [2024-12-06 04:12:21.618306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:34.338 [2024-12-06 04:12:21.618315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:34.338 [2024-12-06 04:12:21.618324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:34.338 [2024-12-06 04:12:21.618331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:34.338 [2024-12-06 04:12:21.618342] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:34.339 [2024-12-06 04:12:21.618351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:34.339 [2024-12-06 04:12:21.618365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:34.339 [2024-12-06 04:12:21.618373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:34.339 [2024-12-06 04:12:21.618381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:34.339 [2024-12-06 04:12:21.618389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:34.339 [2024-12-06 04:12:21.618399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:34.339 [2024-12-06 04:12:21.618406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:34.339 [2024-12-06 04:12:21.618416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:34.339 [2024-12-06 04:12:21.618423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:34.339 [2024-12-06 04:12:21.618433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:34.339 [2024-12-06 04:12:21.618440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:34.339 [2024-12-06 04:12:21.618449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:34.339 [2024-12-06 04:12:21.618456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:34.339 [2024-12-06 04:12:21.618465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:34.339 [2024-12-06 04:12:21.618498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:34.339 [2024-12-06 04:12:21.618509] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:34.339 [2024-12-06 
04:12:21.618518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:34.339 [2024-12-06 04:12:21.618530] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:34.339 [2024-12-06 04:12:21.618538] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:34.339 [2024-12-06 04:12:21.618547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:34.339 [2024-12-06 04:12:21.618554] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:34.339 [2024-12-06 04:12:21.618564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.339 [2024-12-06 04:12:21.618571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:34.339 [2024-12-06 04:12:21.618582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.758 ms 00:21:34.339 [2024-12-06 04:12:21.618592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.339 [2024-12-06 04:12:21.652001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.339 [2024-12-06 04:12:21.652056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:34.339 [2024-12-06 04:12:21.652072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.345 ms 00:21:34.339 [2024-12-06 04:12:21.652083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.339 [2024-12-06 04:12:21.652225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.339 [2024-12-06 04:12:21.652236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:34.339 [2024-12-06 04:12:21.652247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:34.339 [2024-12-06 04:12:21.652256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.339 [2024-12-06 04:12:21.688190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.339 [2024-12-06 04:12:21.688242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:34.339 [2024-12-06 04:12:21.688257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.905 ms 00:21:34.339 [2024-12-06 04:12:21.688265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.339 [2024-12-06 04:12:21.688362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.339 [2024-12-06 04:12:21.688372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:34.339 [2024-12-06 04:12:21.688384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:34.339 [2024-12-06 04:12:21.688392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.339 [2024-12-06 04:12:21.688988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.339 [2024-12-06 04:12:21.689038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:34.339 [2024-12-06 04:12:21.689052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:21:34.339 [2024-12-06 04:12:21.689060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:34.339 [2024-12-06 04:12:21.689212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.339 [2024-12-06 04:12:21.689222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:34.339 [2024-12-06 04:12:21.689233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:21:34.339 [2024-12-06 04:12:21.689241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.339 [2024-12-06 04:12:21.707405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.339 [2024-12-06 04:12:21.707620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:34.339 [2024-12-06 04:12:21.707644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.137 ms 00:21:34.339 [2024-12-06 04:12:21.707653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.339 [2024-12-06 04:12:21.735684] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:34.339 [2024-12-06 04:12:21.735775] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:34.339 [2024-12-06 04:12:21.735800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.339 [2024-12-06 04:12:21.735811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:34.339 [2024-12-06 04:12:21.735828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.995 ms 00:21:34.339 [2024-12-06 04:12:21.735847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.339 [2024-12-06 04:12:21.762358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.339 [2024-12-06 04:12:21.762431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:34.339 [2024-12-06 04:12:21.762448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.382 ms 00:21:34.339 [2024-12-06 04:12:21.762457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.339 [2024-12-06 04:12:21.775463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.339 [2024-12-06 04:12:21.775677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:34.339 [2024-12-06 04:12:21.775709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.879 ms 00:21:34.339 [2024-12-06 04:12:21.775738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.339 [2024-12-06 04:12:21.788669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.339 [2024-12-06 04:12:21.788732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:34.339 [2024-12-06 04:12:21.788748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.837 ms 00:21:34.339 [2024-12-06 04:12:21.788756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.339 [2024-12-06 04:12:21.789467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.339 [2024-12-06 04:12:21.789503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:34.339 [2024-12-06 04:12:21.789516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:21:34.339 [2024-12-06 04:12:21.789524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.339 [2024-12-06 
04:12:21.856338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.340 [2024-12-06 04:12:21.856432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:34.340 [2024-12-06 04:12:21.856453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.783 ms 00:21:34.340 [2024-12-06 04:12:21.856462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.598 [2024-12-06 04:12:21.867767] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:34.598 [2024-12-06 04:12:21.888374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.598 [2024-12-06 04:12:21.888441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:34.598 [2024-12-06 04:12:21.888459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.792 ms 00:21:34.598 [2024-12-06 04:12:21.888471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.598 [2024-12-06 04:12:21.888574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.598 [2024-12-06 04:12:21.888587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:34.598 [2024-12-06 04:12:21.888597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:34.598 [2024-12-06 04:12:21.888608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.598 [2024-12-06 04:12:21.888668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.598 [2024-12-06 04:12:21.888680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:34.599 [2024-12-06 04:12:21.888689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:34.599 [2024-12-06 04:12:21.888701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.599 [2024-12-06 04:12:21.888767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.599 [2024-12-06 04:12:21.888779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:34.599 [2024-12-06 04:12:21.888788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:34.599 [2024-12-06 04:12:21.888801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.599 [2024-12-06 04:12:21.888865] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:34.599 [2024-12-06 04:12:21.888881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.599 [2024-12-06 04:12:21.888893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:34.599 [2024-12-06 04:12:21.888905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:34.599 [2024-12-06 04:12:21.888913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.599 [2024-12-06 04:12:21.915459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.599 [2024-12-06 04:12:21.915659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:34.599 [2024-12-06 04:12:21.915686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.508 ms 00:21:34.599 [2024-12-06 04:12:21.915696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.599 [2024-12-06 04:12:21.915851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.599 [2024-12-06 04:12:21.915864] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:34.599 [2024-12-06 04:12:21.915876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:34.599 [2024-12-06 04:12:21.915888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.599 [2024-12-06 04:12:21.917016] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:34.599 [2024-12-06 04:12:21.921186] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 334.319 ms, result 0 00:21:34.599 [2024-12-06 04:12:21.923187] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:34.599 Some configs were skipped because the RPC state that can call them passed over. 00:21:34.599 04:12:21 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:21:34.856 [2024-12-06 04:12:22.193116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.856 [2024-12-06 04:12:22.193204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:34.856 [2024-12-06 04:12:22.193221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.490 ms 00:21:34.856 [2024-12-06 04:12:22.193231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.856 [2024-12-06 04:12:22.193268] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.654 ms, result 0 00:21:34.856 true 00:21:34.856 04:12:22 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:21:35.113 [2024-12-06 04:12:22.413092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.113 [2024-12-06 04:12:22.413142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:35.113 [2024-12-06 04:12:22.413155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.245 ms 00:21:35.113 [2024-12-06 04:12:22.413162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.113 [2024-12-06 04:12:22.413197] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.356 ms, result 0 00:21:35.113 true 00:21:35.113 04:12:22 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76591 00:21:35.113 04:12:22 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76591 ']' 00:21:35.113 04:12:22 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76591 00:21:35.113 04:12:22 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:21:35.113 04:12:22 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.113 04:12:22 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76591 00:21:35.113 killing process with pid 76591 00:21:35.113 04:12:22 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:35.113 04:12:22 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:35.113 04:12:22 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76591' 00:21:35.113 04:12:22 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76591 00:21:35.113 04:12:22 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76591 00:21:35.677 [2024-12-06 04:12:23.166060] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.677 [2024-12-06 04:12:23.166121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:35.677 [2024-12-06 04:12:23.166135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:35.677 [2024-12-06 04:12:23.166144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.677 [2024-12-06 04:12:23.166167] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:35.677 [2024-12-06 04:12:23.168770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.677 [2024-12-06 04:12:23.168802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:35.677 [2024-12-06 04:12:23.168817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.586 ms 00:21:35.677 [2024-12-06 04:12:23.168826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.677 [2024-12-06 04:12:23.169100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.677 [2024-12-06 04:12:23.169111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:35.678 [2024-12-06 04:12:23.169121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:21:35.678 [2024-12-06 04:12:23.169128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.678 [2024-12-06 04:12:23.173388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.678 [2024-12-06 04:12:23.173421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:35.678 [2024-12-06 04:12:23.173435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.240 ms 00:21:35.678 [2024-12-06 04:12:23.173442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.678 [2024-12-06 04:12:23.180368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.678 [2024-12-06 04:12:23.180400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:35.678 [2024-12-06 04:12:23.180415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.891 ms 00:21:35.678 [2024-12-06 04:12:23.180424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.678 [2024-12-06 04:12:23.190759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.678 [2024-12-06 04:12:23.190799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:35.678 [2024-12-06 04:12:23.190814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.284 ms 00:21:35.678 [2024-12-06 04:12:23.190823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.678 [2024-12-06 04:12:23.198385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.678 [2024-12-06 04:12:23.198420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:35.678 [2024-12-06 04:12:23.198433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.522 ms 00:21:35.678 [2024-12-06 04:12:23.198442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.678 [2024-12-06 04:12:23.198582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.678 [2024-12-06 04:12:23.198593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:35.678 [2024-12-06 04:12:23.198604] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:21:35.678 [2024-12-06 04:12:23.198611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.938 [2024-12-06 04:12:23.209154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.938 [2024-12-06 04:12:23.209205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:35.938 [2024-12-06 04:12:23.209217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.522 ms 00:21:35.938 [2024-12-06 04:12:23.209226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.938 [2024-12-06 04:12:23.219399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.938 [2024-12-06 04:12:23.219430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:35.938 [2024-12-06 04:12:23.219447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.131 ms 00:21:35.938 [2024-12-06 04:12:23.219456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.938 [2024-12-06 04:12:23.229125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.938 [2024-12-06 04:12:23.229157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:35.938 [2024-12-06 04:12:23.229169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.630 ms 00:21:35.938 [2024-12-06 04:12:23.229178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.938 [2024-12-06 04:12:23.239632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.938 [2024-12-06 04:12:23.239664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:35.938 [2024-12-06 04:12:23.239675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.391 ms 00:21:35.938 [2024-12-06 04:12:23.239684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.938 [2024-12-06 04:12:23.239727] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:35.938 [2024-12-06 04:12:23.239742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239831] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.239992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.240000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.240009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.240016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.240025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.240032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 
[2024-12-06 04:12:23.240042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.240049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.240060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:35.938 [2024-12-06 04:12:23.240067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:21:35.939 [2024-12-06 04:12:23.240249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:35.939 [2024-12-06 04:12:23.240600] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:35.939 [2024-12-06 04:12:23.240613] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 35e3cd2c-a5a2-441a-aebe-c05fd677fe36 00:21:35.939 [2024-12-06 04:12:23.240624] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:35.939 [2024-12-06 04:12:23.240632] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:35.939 [2024-12-06 04:12:23.240639] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:35.939 [2024-12-06 04:12:23.240649] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:35.939 [2024-12-06 04:12:23.240656] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:35.939 [2024-12-06 04:12:23.240665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:35.939 [2024-12-06 04:12:23.240672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:35.939 [2024-12-06 04:12:23.240680] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:35.939 [2024-12-06 04:12:23.240686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:35.939 [2024-12-06 04:12:23.240695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:35.939 [2024-12-06 04:12:23.240702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:35.939 [2024-12-06 04:12:23.240712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:21:35.939 [2024-12-06 04:12:23.240728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.939 [2024-12-06 04:12:23.253472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.939 [2024-12-06 04:12:23.253504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:35.939 [2024-12-06 04:12:23.253519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.710 ms 00:21:35.939 [2024-12-06 04:12:23.253528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.939 [2024-12-06 04:12:23.253919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.939 [2024-12-06 04:12:23.253938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:35.939 [2024-12-06 04:12:23.253951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:21:35.939 [2024-12-06 04:12:23.253958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.939 [2024-12-06 04:12:23.297755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.939 [2024-12-06 04:12:23.297803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:35.939 [2024-12-06 04:12:23.297817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.939 [2024-12-06 04:12:23.297825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.939 [2024-12-06 04:12:23.297945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.939 [2024-12-06 04:12:23.297956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:35.939 [2024-12-06 04:12:23.297968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.939 [2024-12-06 04:12:23.297976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.939 [2024-12-06 04:12:23.298025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.939 [2024-12-06 04:12:23.298036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:35.939 [2024-12-06 04:12:23.298048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.939 [2024-12-06 04:12:23.298056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.939 [2024-12-06 04:12:23.298076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.940 [2024-12-06 04:12:23.298085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:35.940 [2024-12-06 04:12:23.298095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.940 [2024-12-06 04:12:23.298105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.940 [2024-12-06 04:12:23.374679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.940 [2024-12-06 04:12:23.374748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:35.940 [2024-12-06 04:12:23.374763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.940 [2024-12-06 04:12:23.374771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.940 [2024-12-06 
04:12:23.437245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.940 [2024-12-06 04:12:23.437296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:35.940 [2024-12-06 04:12:23.437309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.940 [2024-12-06 04:12:23.437320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.940 [2024-12-06 04:12:23.437404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.940 [2024-12-06 04:12:23.437413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:35.940 [2024-12-06 04:12:23.437425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.940 [2024-12-06 04:12:23.437433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.940 [2024-12-06 04:12:23.437463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.940 [2024-12-06 04:12:23.437471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:35.940 [2024-12-06 04:12:23.437481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.940 [2024-12-06 04:12:23.437488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.940 [2024-12-06 04:12:23.437588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.940 [2024-12-06 04:12:23.437597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:35.940 [2024-12-06 04:12:23.437607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.940 [2024-12-06 04:12:23.437614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.940 [2024-12-06 04:12:23.437646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.940 [2024-12-06 04:12:23.437655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:35.940 [2024-12-06 04:12:23.437664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.940 [2024-12-06 04:12:23.437671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.940 [2024-12-06 04:12:23.437709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.940 [2024-12-06 04:12:23.437746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:35.940 [2024-12-06 04:12:23.437758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.940 [2024-12-06 04:12:23.437765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.940 [2024-12-06 04:12:23.437809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.940 [2024-12-06 04:12:23.437819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:35.940 [2024-12-06 04:12:23.437828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.940 [2024-12-06 04:12:23.437840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.940 [2024-12-06 04:12:23.437968] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 271.887 ms, result 0 00:21:36.882 04:12:24 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:36.882 04:12:24 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:36.882 [2024-12-06 04:12:24.187944] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:21:36.882 [2024-12-06 04:12:24.188066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76644 ] 00:21:36.882 [2024-12-06 04:12:24.347205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.142 [2024-12-06 04:12:24.449556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.403 [2024-12-06 04:12:24.711577] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:37.403 [2024-12-06 04:12:24.711649] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:37.403 [2024-12-06 04:12:24.869839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.404 [2024-12-06 04:12:24.869892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:37.404 [2024-12-06 04:12:24.869905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:37.404 [2024-12-06 04:12:24.869914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.404 [2024-12-06 04:12:24.872579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.404 [2024-12-06 04:12:24.872616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:37.404 [2024-12-06 04:12:24.872626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.647 ms 00:21:37.404 [2024-12-06 04:12:24.872633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.404 [2024-12-06 04:12:24.872702] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:37.404 [2024-12-06 04:12:24.873421] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:37.404 [2024-12-06 04:12:24.873445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.404 [2024-12-06 04:12:24.873453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:37.404 [2024-12-06 04:12:24.873462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms 00:21:37.404 [2024-12-06 04:12:24.873469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.404 [2024-12-06 04:12:24.874944] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:37.404 [2024-12-06 04:12:24.887606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.404 [2024-12-06 04:12:24.887645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:37.404 [2024-12-06 04:12:24.887659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.664 ms 00:21:37.404 [2024-12-06 04:12:24.887667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.404 [2024-12-06 04:12:24.887771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.404 [2024-12-06 04:12:24.887783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:37.404 [2024-12-06 04:12:24.887793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.026 ms 00:21:37.404 [2024-12-06 04:12:24.887800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.404 [2024-12-06 04:12:24.892797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.404 [2024-12-06 04:12:24.892826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:37.404 [2024-12-06 04:12:24.892835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.955 ms 00:21:37.404 [2024-12-06 04:12:24.892847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.404 [2024-12-06 04:12:24.892931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.404 [2024-12-06 04:12:24.892940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:37.404 [2024-12-06 04:12:24.892949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:37.404 [2024-12-06 04:12:24.892956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.404 [2024-12-06 04:12:24.892984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.404 [2024-12-06 04:12:24.892992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:37.404 [2024-12-06 04:12:24.893000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:37.404 [2024-12-06 04:12:24.893007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.404 [2024-12-06 04:12:24.893028] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:37.404 [2024-12-06 04:12:24.896279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.404 [2024-12-06 04:12:24.896309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:37.404 [2024-12-06 04:12:24.896319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.257 ms 00:21:37.404 [2024-12-06 04:12:24.896328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.404 [2024-12-06 04:12:24.896365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.404 [2024-12-06 04:12:24.896374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:37.404 [2024-12-06 04:12:24.896384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:37.404 [2024-12-06 04:12:24.896396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.404 [2024-12-06 04:12:24.896416] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:37.404 [2024-12-06 04:12:24.896437] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:37.404 [2024-12-06 04:12:24.896474] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:37.404 [2024-12-06 04:12:24.896490] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:37.404 [2024-12-06 04:12:24.896594] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:37.404 [2024-12-06 04:12:24.896605] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:37.404 [2024-12-06 04:12:24.896617] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:37.404 [2024-12-06 04:12:24.896630] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:37.404 [2024-12-06 04:12:24.896640] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:37.404 [2024-12-06 04:12:24.896649] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:37.404 [2024-12-06 04:12:24.896657] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:37.404 [2024-12-06 04:12:24.896665] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:37.404 [2024-12-06 04:12:24.896673] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:37.404 [2024-12-06 04:12:24.896681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.404 [2024-12-06 04:12:24.896690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:37.404 [2024-12-06 04:12:24.896699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:21:37.404 [2024-12-06 04:12:24.896707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.404 [2024-12-06 04:12:24.896821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.404 [2024-12-06 04:12:24.896834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:37.404 [2024-12-06 04:12:24.896842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:21:37.404 [2024-12-06 04:12:24.896850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.404 [2024-12-06 04:12:24.896965] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:37.404 [2024-12-06 04:12:24.896977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:37.404 [2024-12-06 04:12:24.896986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:37.404 [2024-12-06 04:12:24.896995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.404 [2024-12-06 04:12:24.897003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:37.404 [2024-12-06 04:12:24.897011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:37.404 [2024-12-06 04:12:24.897019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:37.404 [2024-12-06 04:12:24.897027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:37.404 [2024-12-06 04:12:24.897035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:37.404 [2024-12-06 04:12:24.897042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:37.404 [2024-12-06 04:12:24.897050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:37.404 [2024-12-06 04:12:24.897063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:37.404 [2024-12-06 04:12:24.897070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:37.404 [2024-12-06 04:12:24.897078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:37.404 [2024-12-06 04:12:24.897086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:37.404 [2024-12-06 04:12:24.897093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.404 [2024-12-06 04:12:24.897101] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:37.404 [2024-12-06 04:12:24.897108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:37.404 [2024-12-06 04:12:24.897116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.404 [2024-12-06 04:12:24.897123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:37.404 [2024-12-06 04:12:24.897131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:37.404 [2024-12-06 04:12:24.897138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.404 [2024-12-06 04:12:24.897146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:37.404 [2024-12-06 04:12:24.897153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:37.404 [2024-12-06 04:12:24.897161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.404 [2024-12-06 04:12:24.897169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:37.404 [2024-12-06 04:12:24.897176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:37.404 [2024-12-06 04:12:24.897184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.404 [2024-12-06 04:12:24.897191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:37.404 [2024-12-06 04:12:24.897198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:37.404 [2024-12-06 04:12:24.897206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.404 [2024-12-06 04:12:24.897213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:37.404 [2024-12-06 04:12:24.897222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:37.404 [2024-12-06 04:12:24.897230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:37.404 [2024-12-06 04:12:24.897238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:37.404 [2024-12-06 04:12:24.897245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:37.404 [2024-12-06 04:12:24.897252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:37.405 [2024-12-06 04:12:24.897260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:37.405 [2024-12-06 04:12:24.897269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:37.405 [2024-12-06 04:12:24.897276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.405 [2024-12-06 04:12:24.897283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:37.405 [2024-12-06 04:12:24.897291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:37.405 [2024-12-06 04:12:24.897299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.405 [2024-12-06 04:12:24.897307] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:37.405 [2024-12-06 04:12:24.897315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:37.405 [2024-12-06 04:12:24.897325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:37.405 [2024-12-06 04:12:24.897333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.405 [2024-12-06 04:12:24.897341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:37.405 
[2024-12-06 04:12:24.897349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:37.405 [2024-12-06 04:12:24.897355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:37.405 [2024-12-06 04:12:24.897362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:37.405 [2024-12-06 04:12:24.897368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:37.405 [2024-12-06 04:12:24.897374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:37.405 [2024-12-06 04:12:24.897382] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:37.405 [2024-12-06 04:12:24.897391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:37.405 [2024-12-06 04:12:24.897399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:37.405 [2024-12-06 04:12:24.897406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:37.405 [2024-12-06 04:12:24.897413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:37.405 [2024-12-06 04:12:24.897420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:37.405 [2024-12-06 04:12:24.897427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:37.405 [2024-12-06 04:12:24.897434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:37.405 [2024-12-06 04:12:24.897441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:37.405 [2024-12-06 04:12:24.897447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:37.405 [2024-12-06 04:12:24.897454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:37.405 [2024-12-06 04:12:24.897463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:37.405 [2024-12-06 04:12:24.897470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:37.405 [2024-12-06 04:12:24.897477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:37.405 [2024-12-06 04:12:24.897484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:37.405 [2024-12-06 04:12:24.897491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:37.405 [2024-12-06 04:12:24.897498] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:37.405 [2024-12-06 04:12:24.897506] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:37.405 [2024-12-06 04:12:24.897513] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:37.405 [2024-12-06 04:12:24.897520] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:37.405 [2024-12-06 04:12:24.897527] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:37.405 [2024-12-06 04:12:24.897534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:37.405 [2024-12-06 04:12:24.897541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.405 [2024-12-06 04:12:24.897550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:37.405 [2024-12-06 04:12:24.897557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:21:37.405 [2024-12-06 04:12:24.897564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.405 [2024-12-06 04:12:24.923797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.405 [2024-12-06 04:12:24.923832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:37.405 [2024-12-06 04:12:24.923842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.180 ms 00:21:37.405 [2024-12-06 04:12:24.923850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.405 [2024-12-06 04:12:24.923968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.405 [2024-12-06 04:12:24.923979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:37.405 [2024-12-06 04:12:24.923987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:37.405 [2024-12-06 04:12:24.923995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.665 [2024-12-06 04:12:24.967734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.665 [2024-12-06 04:12:24.967779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:37.665 [2024-12-06 04:12:24.967795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.718 ms 00:21:37.665 [2024-12-06 04:12:24.967804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.665 [2024-12-06 04:12:24.967901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.666 [2024-12-06 04:12:24.967913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:37.666 [2024-12-06 04:12:24.967924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:37.666 [2024-12-06 04:12:24.967932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.666 [2024-12-06 04:12:24.968270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.666 [2024-12-06 04:12:24.968298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:37.666 [2024-12-06 04:12:24.968314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:21:37.666 [2024-12-06 04:12:24.968322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.666 [2024-12-06 
04:12:24.968460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.666 [2024-12-06 04:12:24.968470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:37.666 [2024-12-06 04:12:24.968479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:21:37.666 [2024-12-06 04:12:24.968488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.666 [2024-12-06 04:12:24.982010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.666 [2024-12-06 04:12:24.982036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:37.666 [2024-12-06 04:12:24.982046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.501 ms 00:21:37.666 [2024-12-06 04:12:24.982053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.666 [2024-12-06 04:12:24.995456] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:37.666 [2024-12-06 04:12:24.995490] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:37.666 [2024-12-06 04:12:24.995502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.666 [2024-12-06 04:12:24.995510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:37.666 [2024-12-06 04:12:24.995519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.354 ms 00:21:37.666 [2024-12-06 04:12:24.995527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.666 [2024-12-06 04:12:25.020147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.666 [2024-12-06 04:12:25.020182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:37.666 [2024-12-06 04:12:25.020193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.550 ms 00:21:37.666 [2024-12-06 04:12:25.020203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.666 [2024-12-06 04:12:25.032262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.666 [2024-12-06 04:12:25.032292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:37.666 [2024-12-06 04:12:25.032301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.987 ms 00:21:37.666 [2024-12-06 04:12:25.032309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.666 [2024-12-06 04:12:25.044325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.666 [2024-12-06 04:12:25.044358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:37.666 [2024-12-06 04:12:25.044368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.952 ms 00:21:37.666 [2024-12-06 04:12:25.044376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.666 [2024-12-06 04:12:25.044995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.666 [2024-12-06 04:12:25.045021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:37.666 [2024-12-06 04:12:25.045030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:21:37.666 [2024-12-06 04:12:25.045038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.666 [2024-12-06 04:12:25.101256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:37.666 [2024-12-06 04:12:25.101309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:37.666 [2024-12-06 04:12:25.101322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.194 ms 00:21:37.666 [2024-12-06 04:12:25.101330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.666 [2024-12-06 04:12:25.111751] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:37.666 [2024-12-06 04:12:25.126328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.666 [2024-12-06 04:12:25.126372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:37.666 [2024-12-06 04:12:25.126383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.881 ms 00:21:37.666 [2024-12-06 04:12:25.126396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.666 [2024-12-06 04:12:25.126498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.666 [2024-12-06 04:12:25.126510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:37.666 [2024-12-06 04:12:25.126519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:37.666 [2024-12-06 04:12:25.126526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.666 [2024-12-06 04:12:25.126572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.666 [2024-12-06 04:12:25.126581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:37.666 [2024-12-06 04:12:25.126589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:37.666 [2024-12-06 04:12:25.126600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.666 [2024-12-06 04:12:25.126628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.666 [2024-12-06 04:12:25.126636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:37.666 [2024-12-06 04:12:25.126643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:37.666 [2024-12-06 04:12:25.126651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.666 [2024-12-06 04:12:25.126683] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:37.666 [2024-12-06 04:12:25.126693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.666 [2024-12-06 04:12:25.126701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:37.666 [2024-12-06 04:12:25.126708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:37.666 [2024-12-06 04:12:25.126735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.666 [2024-12-06 04:12:25.150579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.666 [2024-12-06 04:12:25.150615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:37.666 [2024-12-06 04:12:25.150627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.822 ms 00:21:37.666 [2024-12-06 04:12:25.150635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.666 [2024-12-06 04:12:25.150733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.666 [2024-12-06 04:12:25.150744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:21:37.666 [2024-12-06 04:12:25.150753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:37.666 [2024-12-06 04:12:25.150761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.666 [2024-12-06 04:12:25.151598] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:37.666 [2024-12-06 04:12:25.154788] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 281.489 ms, result 0 00:21:37.666 [2024-12-06 04:12:25.156135] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:37.666 [2024-12-06 04:12:25.168903] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:39.045  [2024-12-06T04:12:27.514Z] Copying: 16/256 [MB] (16 MBps) [2024-12-06T04:12:28.460Z] Copying: 27/256 [MB] (11 MBps) [2024-12-06T04:12:29.406Z] Copying: 38/256 [MB] (10 MBps) [2024-12-06T04:12:30.351Z] Copying: 49236/262144 [kB] (10072 kBps) [2024-12-06T04:12:31.316Z] Copying: 59140/262144 [kB] (9904 kBps) [2024-12-06T04:12:32.282Z] Copying: 69052/262144 [kB] (9912 kBps) [2024-12-06T04:12:33.226Z] Copying: 79168/262144 [kB] (10116 kBps) [2024-12-06T04:12:34.614Z] Copying: 87/256 [MB] (10 MBps) [2024-12-06T04:12:35.186Z] Copying: 97/256 [MB] (10 MBps) [2024-12-06T04:12:36.571Z] Copying: 107/256 [MB] (10 MBps) [2024-12-06T04:12:37.512Z] Copying: 117/256 [MB] (10 MBps) [2024-12-06T04:12:38.454Z] Copying: 128/256 [MB] (10 MBps) [2024-12-06T04:12:39.429Z] Copying: 141416/262144 [kB] (9912 kBps) [2024-12-06T04:12:40.373Z] Copying: 151552/262144 [kB] (10136 kBps) [2024-12-06T04:12:41.318Z] Copying: 161532/262144 [kB] (9980 kBps) [2024-12-06T04:12:42.262Z] Copying: 171676/262144 [kB] (10144 kBps) [2024-12-06T04:12:43.208Z] Copying: 181768/262144 [kB] (10092 kBps) [2024-12-06T04:12:44.598Z] Copying: 191968/262144 [kB] (10200 kBps) [2024-12-06T04:12:45.172Z] Copying: 197/256 [MB] (10 MBps) [2024-12-06T04:12:46.558Z] Copying: 207/256 [MB] (10 MBps) [2024-12-06T04:12:47.502Z] Copying: 217/256 [MB] (10 MBps) [2024-12-06T04:12:48.446Z] Copying: 233056/262144 [kB] (10016 kBps) [2024-12-06T04:12:49.392Z] Copying: 243196/262144 [kB] (10140 kBps) [2024-12-06T04:12:49.966Z] Copying: 247/256 [MB] (10 MBps) [2024-12-06T04:12:49.966Z] Copying: 256/256 [MB] (average 10 MBps)[2024-12-06 04:12:49.949694] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:02.439 [2024-12-06 04:12:49.959831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.439 [2024-12-06 04:12:49.959883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:02.439 [2024-12-06 04:12:49.959907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:02.439 [2024-12-06 04:12:49.959916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.439 [2024-12-06 04:12:49.959940] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:02.439 [2024-12-06 04:12:49.962847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.439 [2024-12-06 04:12:49.962893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:02.439 [2024-12-06 04:12:49.962904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.893 ms 00:22:02.439 
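The Copying progress markers above track the trim.sh@85 read of ftl0 into the data file: --count=65536 blocks at a 4 KiB FTL block size (an inference from the layout dump, where 0x20-block regions print as 0.12 MiB) is exactly the 256 [MB] total shown, and the run spans roughly 25 s of wall clock. A quick sanity check of both numbers:

    echo "$(( 65536 * 4096 / 1024 / 1024 )) MiB"    # 256 MiB requested via --count=65536
    awk 'BEGIN { printf "%.1f MBps\n", 256 / 25 }'  # ~10 MBps, matching the reported average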
[2024-12-06 04:12:49.962913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.439 [2024-12-06 04:12:49.963179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.439 [2024-12-06 04:12:49.963191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:02.439 [2024-12-06 04:12:49.963200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:22:02.439 [2024-12-06 04:12:49.963208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.702 [2024-12-06 04:12:49.966900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.702 [2024-12-06 04:12:49.966929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:02.702 [2024-12-06 04:12:49.966940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.672 ms 00:22:02.702 [2024-12-06 04:12:49.966948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.702 [2024-12-06 04:12:49.973817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.702 [2024-12-06 04:12:49.973862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:02.702 [2024-12-06 04:12:49.973872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.851 ms 00:22:02.702 [2024-12-06 04:12:49.973880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.702 [2024-12-06 04:12:49.999119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.702 [2024-12-06 04:12:49.999175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:02.702 [2024-12-06 04:12:49.999188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.172 ms 00:22:02.702 [2024-12-06 04:12:49.999197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.702 [2024-12-06 04:12:50.015554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.702 [2024-12-06 04:12:50.015605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:02.702 [2024-12-06 04:12:50.015623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.307 ms 00:22:02.702 [2024-12-06 04:12:50.015631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.702 [2024-12-06 04:12:50.015816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.702 [2024-12-06 04:12:50.015829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:02.702 [2024-12-06 04:12:50.015849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:22:02.702 [2024-12-06 04:12:50.015857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.702 [2024-12-06 04:12:50.042053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.702 [2024-12-06 04:12:50.042126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:02.702 [2024-12-06 04:12:50.042140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.179 ms 00:22:02.702 [2024-12-06 04:12:50.042148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.702 [2024-12-06 04:12:50.067845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.702 [2024-12-06 04:12:50.067900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:02.702 [2024-12-06 04:12:50.067913] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.633 ms 00:22:02.702 [2024-12-06 04:12:50.067921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.702 [2024-12-06 04:12:50.093228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.702 [2024-12-06 04:12:50.093284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:02.702 [2024-12-06 04:12:50.093297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.244 ms 00:22:02.702 [2024-12-06 04:12:50.093306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.702 [2024-12-06 04:12:50.118149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.702 [2024-12-06 04:12:50.118202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:02.702 [2024-12-06 04:12:50.118214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.744 ms 00:22:02.702 [2024-12-06 04:12:50.118222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.702 [2024-12-06 04:12:50.118270] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:02.703 [2024-12-06 04:12:50.118289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118639] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 
04:12:50.118866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.118992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.119001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.119008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.119017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.119024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.119032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.119040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-12-06 04:12:50.119048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-12-06 04:12:50.119056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
00:22:02.704 [2024-12-06 04:12:50.119065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-12-06 04:12:50.119074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-12-06 04:12:50.119094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-12-06 04:12:50.119103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-12-06 04:12:50.119111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-12-06 04:12:50.119120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-12-06 04:12:50.119128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-12-06 04:12:50.119137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-12-06 04:12:50.119145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-12-06 04:12:50.119163] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:02.704 [2024-12-06 04:12:50.119172] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 35e3cd2c-a5a2-441a-aebe-c05fd677fe36 00:22:02.704 [2024-12-06 04:12:50.119182] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:02.704 [2024-12-06 04:12:50.119191] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:02.704 [2024-12-06 04:12:50.119199] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:02.704 [2024-12-06 04:12:50.119206] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:02.704 [2024-12-06 04:12:50.119220] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:02.704 [2024-12-06 04:12:50.119229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:02.704 [2024-12-06 04:12:50.119241] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:02.704 [2024-12-06 04:12:50.119249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:02.704 [2024-12-06 04:12:50.119255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:02.704 [2024-12-06 04:12:50.119263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.704 [2024-12-06 04:12:50.119271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:02.704 [2024-12-06 04:12:50.119281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms 00:22:02.704 [2024-12-06 04:12:50.119288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.704 [2024-12-06 04:12:50.133044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.704 [2024-12-06 04:12:50.133093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:02.704 [2024-12-06 04:12:50.133105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.723 ms 00:22:02.704 [2024-12-06 04:12:50.133114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.704 [2024-12-06 04:12:50.133522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
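Every management step in this shutdown is traced as an Action / name / duration / status quadruple, and the finish_msg entry that eventually closes the sequence reports a wall-clock total for the whole process (finish_msg writes "duration =" rather than "duration:"). Summing the per-step durations out of a saved copy of this console output is a useful cross-check; ftl.log is a hypothetical file name for that copy:

    # per-step durations only; finish_msg totals use "duration =" and are skipped,
    # so the sum is a lower bound on the reported process total
    grep -o 'duration: [0-9.]* ms' ftl.log |
        awk '{ s += $2 } END { printf "traced steps: %.3f ms\n", s }'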
00:22:02.704 [2024-12-06 04:12:50.133540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:02.704 [2024-12-06 04:12:50.133550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:22:02.704 [2024-12-06 04:12:50.133558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.704 [2024-12-06 04:12:50.171997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.704 [2024-12-06 04:12:50.172052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:02.704 [2024-12-06 04:12:50.172064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.704 [2024-12-06 04:12:50.172079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.704 [2024-12-06 04:12:50.172172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.704 [2024-12-06 04:12:50.172183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:02.704 [2024-12-06 04:12:50.172192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.704 [2024-12-06 04:12:50.172200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.704 [2024-12-06 04:12:50.172254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.704 [2024-12-06 04:12:50.172265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:02.704 [2024-12-06 04:12:50.172273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.704 [2024-12-06 04:12:50.172282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.704 [2024-12-06 04:12:50.172305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.704 [2024-12-06 04:12:50.172314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:02.704 [2024-12-06 04:12:50.172323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.704 [2024-12-06 04:12:50.172330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.966 [2024-12-06 04:12:50.257790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.966 [2024-12-06 04:12:50.257856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:02.966 [2024-12-06 04:12:50.257871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.966 [2024-12-06 04:12:50.257880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.966 [2024-12-06 04:12:50.327506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.966 [2024-12-06 04:12:50.327572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:02.966 [2024-12-06 04:12:50.327585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.966 [2024-12-06 04:12:50.327594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.966 [2024-12-06 04:12:50.327656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.966 [2024-12-06 04:12:50.327666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:02.966 [2024-12-06 04:12:50.327675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.966 [2024-12-06 04:12:50.327684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.966 
[2024-12-06 04:12:50.327739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.966 [2024-12-06 04:12:50.327757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:02.966 [2024-12-06 04:12:50.327767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.966 [2024-12-06 04:12:50.327775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.966 [2024-12-06 04:12:50.327874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.966 [2024-12-06 04:12:50.327886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:02.966 [2024-12-06 04:12:50.327895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.966 [2024-12-06 04:12:50.327903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.966 [2024-12-06 04:12:50.327937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.966 [2024-12-06 04:12:50.327947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:02.966 [2024-12-06 04:12:50.327959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.966 [2024-12-06 04:12:50.327967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.966 [2024-12-06 04:12:50.328012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.966 [2024-12-06 04:12:50.328022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:02.966 [2024-12-06 04:12:50.328031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.966 [2024-12-06 04:12:50.328040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.966 [2024-12-06 04:12:50.328088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.966 [2024-12-06 04:12:50.328102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:02.966 [2024-12-06 04:12:50.328110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.966 [2024-12-06 04:12:50.328118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.966 [2024-12-06 04:12:50.328274] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 368.432 ms, result 0 00:22:03.943 00:22:03.943 00:22:03.943 04:12:51 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:22:03.943 04:12:51 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:04.220 04:12:51 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:04.482 [2024-12-06 04:12:51.748873] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
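In the statistics dump above, WAF is the write amplification factor (total writes divided by user writes); it prints as inf here because the device logged 960 internal writes against zero user writes. The trim.sh steps at @86–@90 then verify the trimmed range and refill it: cmp checks that the 4 MiB read back from the FTL device compares equal to /dev/zero, md5sum fingerprints the file, and spdk_dd writes 1024 blocks of a random pattern back into the ftl0 bdev. A minimal sketch of the same verify-then-rewrite pattern, with illustrative relative paths (the repo layout below is assumed, not taken from this run):

  cmp --bytes=$((4 * 1024 * 1024)) ./test/ftl/data /dev/zero   # trimmed range must read back as all zeros
  md5sum ./test/ftl/data                                       # fingerprint the read-back data
  ./build/bin/spdk_dd --if=./test/ftl/random_pattern --ob=ftl0 \
      --count=1024 --json=./test/ftl/config/ftl.json           # rewrite the range with random data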
00:22:04.482 [2024-12-06 04:12:51.749204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76937 ] 00:22:04.482 [2024-12-06 04:12:51.910339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.744 [2024-12-06 04:12:52.035046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.007 [2024-12-06 04:12:52.330581] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:05.007 [2024-12-06 04:12:52.330677] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:05.007 [2024-12-06 04:12:52.492585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.007 [2024-12-06 04:12:52.492656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:05.007 [2024-12-06 04:12:52.492672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:05.007 [2024-12-06 04:12:52.492681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.007 [2024-12-06 04:12:52.495746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.007 [2024-12-06 04:12:52.495799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:05.007 [2024-12-06 04:12:52.495810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.043 ms 00:22:05.007 [2024-12-06 04:12:52.495818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.007 [2024-12-06 04:12:52.495940] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:05.007 [2024-12-06 04:12:52.496690] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:05.007 [2024-12-06 04:12:52.496730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.007 [2024-12-06 04:12:52.496739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:05.007 [2024-12-06 04:12:52.496749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:22:05.007 [2024-12-06 04:12:52.496757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.007 [2024-12-06 04:12:52.498646] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:05.007 [2024-12-06 04:12:52.512970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.007 [2024-12-06 04:12:52.513023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:05.007 [2024-12-06 04:12:52.513037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.327 ms 00:22:05.007 [2024-12-06 04:12:52.513046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.007 [2024-12-06 04:12:52.513167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.007 [2024-12-06 04:12:52.513180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:05.007 [2024-12-06 04:12:52.513191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:05.007 [2024-12-06 04:12:52.513199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.007 [2024-12-06 04:12:52.521433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:05.007 [2024-12-06 04:12:52.521483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:05.007 [2024-12-06 04:12:52.521494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.187 ms 00:22:05.007 [2024-12-06 04:12:52.521502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.007 [2024-12-06 04:12:52.521613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.007 [2024-12-06 04:12:52.521625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:05.007 [2024-12-06 04:12:52.521634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:05.007 [2024-12-06 04:12:52.521643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.007 [2024-12-06 04:12:52.521677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.007 [2024-12-06 04:12:52.521686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:05.007 [2024-12-06 04:12:52.521694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:05.007 [2024-12-06 04:12:52.521702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.007 [2024-12-06 04:12:52.521743] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:05.007 [2024-12-06 04:12:52.525774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.007 [2024-12-06 04:12:52.525817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:05.007 [2024-12-06 04:12:52.525827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.038 ms 00:22:05.007 [2024-12-06 04:12:52.525837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.007 [2024-12-06 04:12:52.525915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.007 [2024-12-06 04:12:52.525926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:05.007 [2024-12-06 04:12:52.525935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:05.007 [2024-12-06 04:12:52.525943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.007 [2024-12-06 04:12:52.525968] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:05.007 [2024-12-06 04:12:52.525991] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:05.007 [2024-12-06 04:12:52.526029] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:05.007 [2024-12-06 04:12:52.526045] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:05.007 [2024-12-06 04:12:52.526154] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:05.007 [2024-12-06 04:12:52.526166] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:05.007 [2024-12-06 04:12:52.526177] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:05.007 [2024-12-06 04:12:52.526190] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:05.007 [2024-12-06 04:12:52.526200] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:05.007 [2024-12-06 04:12:52.526209] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:05.007 [2024-12-06 04:12:52.526217] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:05.007 [2024-12-06 04:12:52.526224] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:05.007 [2024-12-06 04:12:52.526232] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:05.007 [2024-12-06 04:12:52.526240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.007 [2024-12-06 04:12:52.526248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:05.007 [2024-12-06 04:12:52.526256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:22:05.007 [2024-12-06 04:12:52.526264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.007 [2024-12-06 04:12:52.526352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.007 [2024-12-06 04:12:52.526364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:05.008 [2024-12-06 04:12:52.526372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:05.008 [2024-12-06 04:12:52.526379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.008 [2024-12-06 04:12:52.526512] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:05.008 [2024-12-06 04:12:52.526524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:05.008 [2024-12-06 04:12:52.526533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:05.008 [2024-12-06 04:12:52.526541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.008 [2024-12-06 04:12:52.526549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:05.008 [2024-12-06 04:12:52.526557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:05.008 [2024-12-06 04:12:52.526563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:05.008 [2024-12-06 04:12:52.526570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:05.008 [2024-12-06 04:12:52.526578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:05.008 [2024-12-06 04:12:52.526584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:05.008 [2024-12-06 04:12:52.526591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:05.008 [2024-12-06 04:12:52.526608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:05.008 [2024-12-06 04:12:52.526617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:05.008 [2024-12-06 04:12:52.526624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:05.008 [2024-12-06 04:12:52.526631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:05.008 [2024-12-06 04:12:52.526638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.008 [2024-12-06 04:12:52.526645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:05.008 [2024-12-06 04:12:52.526652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:05.008 [2024-12-06 04:12:52.526659] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.008 [2024-12-06 04:12:52.526667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:05.008 [2024-12-06 04:12:52.526674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:05.008 [2024-12-06 04:12:52.526681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:05.008 [2024-12-06 04:12:52.526688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:05.008 [2024-12-06 04:12:52.526695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:05.008 [2024-12-06 04:12:52.526701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:05.008 [2024-12-06 04:12:52.526709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:05.008 [2024-12-06 04:12:52.526733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:05.008 [2024-12-06 04:12:52.526740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:05.008 [2024-12-06 04:12:52.526747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:05.008 [2024-12-06 04:12:52.526754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:05.008 [2024-12-06 04:12:52.526760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:05.008 [2024-12-06 04:12:52.526768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:05.008 [2024-12-06 04:12:52.526774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:05.008 [2024-12-06 04:12:52.526781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:05.008 [2024-12-06 04:12:52.526789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:05.008 [2024-12-06 04:12:52.526795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:05.008 [2024-12-06 04:12:52.526802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:05.008 [2024-12-06 04:12:52.526808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:05.008 [2024-12-06 04:12:52.526815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:05.008 [2024-12-06 04:12:52.526821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.008 [2024-12-06 04:12:52.526827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:05.008 [2024-12-06 04:12:52.526834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:05.008 [2024-12-06 04:12:52.526841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.008 [2024-12-06 04:12:52.526848] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:05.008 [2024-12-06 04:12:52.526858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:05.008 [2024-12-06 04:12:52.526868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:05.008 [2024-12-06 04:12:52.526876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.008 [2024-12-06 04:12:52.526885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:05.008 [2024-12-06 04:12:52.526892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:05.008 [2024-12-06 04:12:52.526899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:05.008 
[2024-12-06 04:12:52.526905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:05.008 [2024-12-06 04:12:52.526912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:05.008 [2024-12-06 04:12:52.526919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:05.008 [2024-12-06 04:12:52.526927] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:05.008 [2024-12-06 04:12:52.526937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:05.008 [2024-12-06 04:12:52.526945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:05.008 [2024-12-06 04:12:52.526953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:05.008 [2024-12-06 04:12:52.526960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:05.008 [2024-12-06 04:12:52.526966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:05.008 [2024-12-06 04:12:52.526973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:05.008 [2024-12-06 04:12:52.526980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:05.008 [2024-12-06 04:12:52.526987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:05.008 [2024-12-06 04:12:52.526994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:05.008 [2024-12-06 04:12:52.527001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:05.008 [2024-12-06 04:12:52.527009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:05.008 [2024-12-06 04:12:52.527016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:05.008 [2024-12-06 04:12:52.527023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:05.008 [2024-12-06 04:12:52.527030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:05.008 [2024-12-06 04:12:52.527037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:05.008 [2024-12-06 04:12:52.527045] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:05.008 [2024-12-06 04:12:52.527053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:05.008 [2024-12-06 04:12:52.527061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:05.008 [2024-12-06 04:12:52.527068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:05.008 [2024-12-06 04:12:52.527076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:05.009 [2024-12-06 04:12:52.527083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:05.009 [2024-12-06 04:12:52.527092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.009 [2024-12-06 04:12:52.527111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:05.009 [2024-12-06 04:12:52.527119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:22:05.009 [2024-12-06 04:12:52.527127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.270 [2024-12-06 04:12:52.559100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.270 [2024-12-06 04:12:52.559156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:05.270 [2024-12-06 04:12:52.559168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.906 ms 00:22:05.270 [2024-12-06 04:12:52.559177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.270 [2024-12-06 04:12:52.559319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.270 [2024-12-06 04:12:52.559331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:05.270 [2024-12-06 04:12:52.559340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:05.270 [2024-12-06 04:12:52.559348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.270 [2024-12-06 04:12:52.610142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.270 [2024-12-06 04:12:52.610202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:05.270 [2024-12-06 04:12:52.610220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.768 ms 00:22:05.270 [2024-12-06 04:12:52.610230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.270 [2024-12-06 04:12:52.610342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.270 [2024-12-06 04:12:52.610355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:05.271 [2024-12-06 04:12:52.610365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:05.271 [2024-12-06 04:12:52.610374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.271 [2024-12-06 04:12:52.611010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.271 [2024-12-06 04:12:52.611032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:05.271 [2024-12-06 04:12:52.611052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:22:05.271 [2024-12-06 04:12:52.611060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.271 [2024-12-06 04:12:52.611219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.271 [2024-12-06 04:12:52.611230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:05.271 [2024-12-06 04:12:52.611239] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:22:05.271 [2024-12-06 04:12:52.611247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.271 [2024-12-06 04:12:52.627608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.271 [2024-12-06 04:12:52.627661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:05.271 [2024-12-06 04:12:52.627673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.338 ms 00:22:05.271 [2024-12-06 04:12:52.627681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.271 [2024-12-06 04:12:52.642241] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:05.271 [2024-12-06 04:12:52.642310] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:05.271 [2024-12-06 04:12:52.642324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.271 [2024-12-06 04:12:52.642333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:05.271 [2024-12-06 04:12:52.642343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.506 ms 00:22:05.271 [2024-12-06 04:12:52.642350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.271 [2024-12-06 04:12:52.668337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.271 [2024-12-06 04:12:52.668392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:05.271 [2024-12-06 04:12:52.668405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.888 ms 00:22:05.271 [2024-12-06 04:12:52.668413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.271 [2024-12-06 04:12:52.681230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.271 [2024-12-06 04:12:52.681281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:05.271 [2024-12-06 04:12:52.681293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.719 ms 00:22:05.271 [2024-12-06 04:12:52.681300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.271 [2024-12-06 04:12:52.694068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.271 [2024-12-06 04:12:52.694119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:05.271 [2024-12-06 04:12:52.694131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.678 ms 00:22:05.271 [2024-12-06 04:12:52.694138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.271 [2024-12-06 04:12:52.694869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.271 [2024-12-06 04:12:52.694899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:05.271 [2024-12-06 04:12:52.694910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:22:05.271 [2024-12-06 04:12:52.694918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.271 [2024-12-06 04:12:52.761692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.271 [2024-12-06 04:12:52.761779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:05.271 [2024-12-06 04:12:52.761795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.743 ms 00:22:05.271 [2024-12-06 04:12:52.761805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.271 [2024-12-06 04:12:52.773039] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:05.271 [2024-12-06 04:12:52.792700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.271 [2024-12-06 04:12:52.792774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:05.271 [2024-12-06 04:12:52.792788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.788 ms 00:22:05.271 [2024-12-06 04:12:52.792804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.271 [2024-12-06 04:12:52.792913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.271 [2024-12-06 04:12:52.792925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:05.271 [2024-12-06 04:12:52.792935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:05.271 [2024-12-06 04:12:52.792943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.271 [2024-12-06 04:12:52.793005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.271 [2024-12-06 04:12:52.793015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:05.271 [2024-12-06 04:12:52.793024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:05.271 [2024-12-06 04:12:52.793038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.271 [2024-12-06 04:12:52.793071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.271 [2024-12-06 04:12:52.793080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:05.271 [2024-12-06 04:12:52.793089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:05.271 [2024-12-06 04:12:52.793097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.271 [2024-12-06 04:12:52.793138] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:05.271 [2024-12-06 04:12:52.793150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.271 [2024-12-06 04:12:52.793158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:05.271 [2024-12-06 04:12:52.793167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:05.271 [2024-12-06 04:12:52.793175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.533 [2024-12-06 04:12:52.819314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.533 [2024-12-06 04:12:52.819368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:05.533 [2024-12-06 04:12:52.819382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.116 ms 00:22:05.533 [2024-12-06 04:12:52.819391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.533 [2024-12-06 04:12:52.819509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.533 [2024-12-06 04:12:52.819522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:05.533 [2024-12-06 04:12:52.819532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:05.533 [2024-12-06 04:12:52.819541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
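Each management step above is traced as an Action/name/duration/status quadruple from mngt/ftl_mngt.c, so per-step startup timings can be pulled straight out of the console log. A hedged sketch, assuming the raw log (one entry per line, as in the original console output) has been saved as ftl_startup.log:

  # 428:trace_step lines carry the step name, 430:trace_step lines its duration;
  # they alternate one-to-one, so pairing them line by line gives a timing table
  paste <(grep '428:trace_step' ftl_startup.log | sed 's/.*name: //') \
        <(grep '430:trace_step' ftl_startup.log | sed 's/.*duration: //')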
00:22:05.533 [2024-12-06 04:12:52.820771] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:05.533 [2024-12-06 04:12:52.824242] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 327.836 ms, result 0 00:22:05.533 [2024-12-06 04:12:52.825691] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:05.533 [2024-12-06 04:12:52.839355] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:05.796  [2024-12-06T04:12:53.323Z] Copying: 4096/4096 [kB] (average 10240 kBps)[2024-12-06 04:12:53.242545] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:05.796 [2024-12-06 04:12:53.251983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.796 [2024-12-06 04:12:53.252039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:05.796 [2024-12-06 04:12:53.252061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:05.796 [2024-12-06 04:12:53.252070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.796 [2024-12-06 04:12:53.252093] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:05.796 [2024-12-06 04:12:53.255047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.796 [2024-12-06 04:12:53.255093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:05.796 [2024-12-06 04:12:53.255104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.939 ms 00:22:05.796 [2024-12-06 04:12:53.255113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.796 [2024-12-06 04:12:53.258238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.796 [2024-12-06 04:12:53.258288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:05.796 [2024-12-06 04:12:53.258299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.096 ms 00:22:05.796 [2024-12-06 04:12:53.258307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.796 [2024-12-06 04:12:53.262755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.796 [2024-12-06 04:12:53.262796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:05.796 [2024-12-06 04:12:53.262808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.425 ms 00:22:05.796 [2024-12-06 04:12:53.262816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.796 [2024-12-06 04:12:53.269747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.796 [2024-12-06 04:12:53.269792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:05.796 [2024-12-06 04:12:53.269804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.898 ms 00:22:05.796 [2024-12-06 04:12:53.269813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.796 [2024-12-06 04:12:53.295370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.796 [2024-12-06 04:12:53.295423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:05.796 [2024-12-06 04:12:53.295436] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 25.503 ms 00:22:05.796 [2024-12-06 04:12:53.295443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.796 [2024-12-06 04:12:53.311951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.796 [2024-12-06 04:12:53.312010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:05.796 [2024-12-06 04:12:53.312022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.446 ms 00:22:05.796 [2024-12-06 04:12:53.312030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.796 [2024-12-06 04:12:53.312203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.796 [2024-12-06 04:12:53.312215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:05.796 [2024-12-06 04:12:53.312234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:22:05.796 [2024-12-06 04:12:53.312243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.059 [2024-12-06 04:12:53.338361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.059 [2024-12-06 04:12:53.338409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:06.059 [2024-12-06 04:12:53.338421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.101 ms 00:22:06.059 [2024-12-06 04:12:53.338429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.059 [2024-12-06 04:12:53.363438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.059 [2024-12-06 04:12:53.363490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:06.059 [2024-12-06 04:12:53.363502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.933 ms 00:22:06.059 [2024-12-06 04:12:53.363509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.059 [2024-12-06 04:12:53.387851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.059 [2024-12-06 04:12:53.387905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:06.059 [2024-12-06 04:12:53.387917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.292 ms 00:22:06.059 [2024-12-06 04:12:53.387924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.059 [2024-12-06 04:12:53.412947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.059 [2024-12-06 04:12:53.412998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:06.059 [2024-12-06 04:12:53.413010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.943 ms 00:22:06.059 [2024-12-06 04:12:53.413017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.060 [2024-12-06 04:12:53.413086] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:06.060 [2024-12-06 04:12:53.413102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:22:06.060 [2024-12-06 04:12:53.413138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413697] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:06.060 [2024-12-06 04:12:53.413766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:06.061 [2024-12-06 04:12:53.413774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:06.061 [2024-12-06 04:12:53.413782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:06.061 [2024-12-06 04:12:53.413790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:06.061 [2024-12-06 04:12:53.413797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:06.061 [2024-12-06 04:12:53.413804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:06.061 [2024-12-06 04:12:53.413812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:06.061 [2024-12-06 04:12:53.413821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:06.061 [2024-12-06 04:12:53.413839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:06.061 [2024-12-06 04:12:53.413848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:06.061 [2024-12-06 04:12:53.413856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:06.061 [2024-12-06 04:12:53.413864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:06.061 [2024-12-06 04:12:53.413872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:06.061 [2024-12-06 04:12:53.413879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:06.061 [2024-12-06 04:12:53.413887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:06.061 [2024-12-06 04:12:53.413903] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:06.061 [2024-12-06 04:12:53.413915] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 35e3cd2c-a5a2-441a-aebe-c05fd677fe36 00:22:06.061 [2024-12-06 04:12:53.413924] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:06.061 [2024-12-06 04:12:53.413931] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:22:06.061 [2024-12-06 04:12:53.413939] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:06.061 [2024-12-06 04:12:53.413948] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:06.061 [2024-12-06 04:12:53.413955] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:06.061 [2024-12-06 04:12:53.413964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:06.061 [2024-12-06 04:12:53.413975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:06.061 [2024-12-06 04:12:53.413982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:06.061 [2024-12-06 04:12:53.413989] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:06.061 [2024-12-06 04:12:53.414002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.061 [2024-12-06 04:12:53.414011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:06.061 [2024-12-06 04:12:53.414020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 00:22:06.061 [2024-12-06 04:12:53.414028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.061 [2024-12-06 04:12:53.427932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.061 [2024-12-06 04:12:53.427980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:06.061 [2024-12-06 04:12:53.427992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.869 ms 00:22:06.061 [2024-12-06 04:12:53.428000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.061 [2024-12-06 04:12:53.428401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.061 [2024-12-06 04:12:53.428411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:06.061 [2024-12-06 04:12:53.428421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:22:06.061 [2024-12-06 04:12:53.428429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.061 [2024-12-06 04:12:53.467224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.061 [2024-12-06 04:12:53.467281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:06.061 [2024-12-06 04:12:53.467294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.061 [2024-12-06 04:12:53.467309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.061 [2024-12-06 04:12:53.467397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.061 [2024-12-06 04:12:53.467406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:06.061 [2024-12-06 04:12:53.467417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.061 [2024-12-06 04:12:53.467425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.061 [2024-12-06 04:12:53.467479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.061 [2024-12-06 04:12:53.467489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:06.061 [2024-12-06 04:12:53.467498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.061 [2024-12-06 04:12:53.467507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.061 [2024-12-06 04:12:53.467529] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.061 [2024-12-06 04:12:53.467538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:06.061 [2024-12-06 04:12:53.467546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.061 [2024-12-06 04:12:53.467554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.061 [2024-12-06 04:12:53.553194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.061 [2024-12-06 04:12:53.553257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:06.061 [2024-12-06 04:12:53.553273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.061 [2024-12-06 04:12:53.553288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.323 [2024-12-06 04:12:53.622771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.323 [2024-12-06 04:12:53.622835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:06.323 [2024-12-06 04:12:53.622849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.323 [2024-12-06 04:12:53.622858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.323 [2024-12-06 04:12:53.622915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.323 [2024-12-06 04:12:53.622925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:06.323 [2024-12-06 04:12:53.622934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.323 [2024-12-06 04:12:53.622943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.323 [2024-12-06 04:12:53.622975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.323 [2024-12-06 04:12:53.622991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:06.323 [2024-12-06 04:12:53.623000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.323 [2024-12-06 04:12:53.623010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.323 [2024-12-06 04:12:53.623105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.323 [2024-12-06 04:12:53.623117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:06.323 [2024-12-06 04:12:53.623125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.323 [2024-12-06 04:12:53.623133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.323 [2024-12-06 04:12:53.623168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.323 [2024-12-06 04:12:53.623177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:06.323 [2024-12-06 04:12:53.623189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.323 [2024-12-06 04:12:53.623197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.323 [2024-12-06 04:12:53.623242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.323 [2024-12-06 04:12:53.623252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:06.323 [2024-12-06 04:12:53.623261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.323 [2024-12-06 04:12:53.623269] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:06.323 [2024-12-06 04:12:53.623320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.323 [2024-12-06 04:12:53.623334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:06.323 [2024-12-06 04:12:53.623342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.323 [2024-12-06 04:12:53.623350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.323 [2024-12-06 04:12:53.623511] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.514 ms, result 0 00:22:06.896 00:22:06.896 00:22:06.896 04:12:54 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76964 00:22:06.896 04:12:54 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76964 00:22:06.896 04:12:54 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76964 ']' 00:22:06.896 04:12:54 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.897 04:12:54 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.897 04:12:54 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.897 04:12:54 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.897 04:12:54 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:06.897 04:12:54 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:22:07.157 [2024-12-06 04:12:54.485099] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
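For context, the bring-up recorded in the surrounding entries is the standard ftl_trim pattern: start spdk_tgt with FTL init tracing, wait for its RPC socket, then restore the bdev stack over rpc.py. A minimal sketch of that sequence, reusing only the binaries and flags visible in this log (the svcpid bookkeeping mirrors trim.sh; the config source for load_config is elided here, so treat this as illustrative rather than a verbatim excerpt):

    # Launch the SPDK target with FTL init tracing (trim.sh@92 above).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!

    # Block until the target listens on /var/tmp/spdk.sock
    # (waitforlisten is a helper from autotest_common.sh, as traced above).
    waitforlisten "$svcpid"

    # Re-create the bdev/FTL stack from the saved JSON config (trim.sh@96);
    # load_config reads the config on stdin, which this log does not show.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
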
00:22:07.157 [2024-12-06 04:12:54.485264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76964 ] 00:22:07.157 [2024-12-06 04:12:54.649170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.419 [2024-12-06 04:12:54.754612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.993 04:12:55 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:07.993 04:12:55 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:22:07.993 04:12:55 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:08.254 [2024-12-06 04:12:55.539761] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:08.254 [2024-12-06 04:12:55.539823] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:08.254 [2024-12-06 04:12:55.713839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.254 [2024-12-06 04:12:55.713892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:08.254 [2024-12-06 04:12:55.713909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:08.254 [2024-12-06 04:12:55.713917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.254 [2024-12-06 04:12:55.716709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.254 [2024-12-06 04:12:55.716768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:08.254 [2024-12-06 04:12:55.716780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.772 ms 00:22:08.254 [2024-12-06 04:12:55.716788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.254 [2024-12-06 04:12:55.716888] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:08.254 [2024-12-06 04:12:55.717571] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:08.254 [2024-12-06 04:12:55.717599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.254 [2024-12-06 04:12:55.717607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:08.254 [2024-12-06 04:12:55.717618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:22:08.254 [2024-12-06 04:12:55.717625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.254 [2024-12-06 04:12:55.719073] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:08.254 [2024-12-06 04:12:55.732449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.254 [2024-12-06 04:12:55.732495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:08.254 [2024-12-06 04:12:55.732508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.379 ms 00:22:08.254 [2024-12-06 04:12:55.732519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.254 [2024-12-06 04:12:55.732617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.254 [2024-12-06 04:12:55.732631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:08.254 [2024-12-06 04:12:55.732640] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:22:08.254 [2024-12-06 04:12:55.732652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.254 [2024-12-06 04:12:55.739352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.254 [2024-12-06 04:12:55.739397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:08.254 [2024-12-06 04:12:55.739407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.649 ms 00:22:08.254 [2024-12-06 04:12:55.739416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.254 [2024-12-06 04:12:55.739519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.254 [2024-12-06 04:12:55.739532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:08.254 [2024-12-06 04:12:55.739540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:08.254 [2024-12-06 04:12:55.739554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.254 [2024-12-06 04:12:55.739580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.254 [2024-12-06 04:12:55.739591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:08.254 [2024-12-06 04:12:55.739598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:08.254 [2024-12-06 04:12:55.739607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.254 [2024-12-06 04:12:55.739629] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:08.254 [2024-12-06 04:12:55.743271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.254 [2024-12-06 04:12:55.743308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:08.254 [2024-12-06 04:12:55.743320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.644 ms 00:22:08.254 [2024-12-06 04:12:55.743328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.254 [2024-12-06 04:12:55.743385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.254 [2024-12-06 04:12:55.743394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:08.254 [2024-12-06 04:12:55.743405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:08.254 [2024-12-06 04:12:55.743415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.254 [2024-12-06 04:12:55.743437] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:08.254 [2024-12-06 04:12:55.743458] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:08.254 [2024-12-06 04:12:55.743503] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:08.254 [2024-12-06 04:12:55.743519] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:08.254 [2024-12-06 04:12:55.743626] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:08.254 [2024-12-06 04:12:55.743638] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:08.254 [2024-12-06 04:12:55.743652] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:08.254 [2024-12-06 04:12:55.743662] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:08.254 [2024-12-06 04:12:55.743674] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:08.254 [2024-12-06 04:12:55.743682] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:08.254 [2024-12-06 04:12:55.743691] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:08.254 [2024-12-06 04:12:55.743698] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:08.254 [2024-12-06 04:12:55.743709] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:08.254 [2024-12-06 04:12:55.743731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.254 [2024-12-06 04:12:55.743740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:08.254 [2024-12-06 04:12:55.743748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:22:08.254 [2024-12-06 04:12:55.743757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.254 [2024-12-06 04:12:55.743861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.254 [2024-12-06 04:12:55.743874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:08.254 [2024-12-06 04:12:55.743883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:08.254 [2024-12-06 04:12:55.743892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.254 [2024-12-06 04:12:55.743993] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:08.254 [2024-12-06 04:12:55.744011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:08.254 [2024-12-06 04:12:55.744020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:08.254 [2024-12-06 04:12:55.744030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.254 [2024-12-06 04:12:55.744038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:08.254 [2024-12-06 04:12:55.744048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:08.254 [2024-12-06 04:12:55.744055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:08.254 [2024-12-06 04:12:55.744066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:08.254 [2024-12-06 04:12:55.744073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:08.254 [2024-12-06 04:12:55.744082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:08.254 [2024-12-06 04:12:55.744089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:08.254 [2024-12-06 04:12:55.744097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:08.254 [2024-12-06 04:12:55.744103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:08.254 [2024-12-06 04:12:55.744112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:08.254 [2024-12-06 04:12:55.744121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:08.255 [2024-12-06 04:12:55.744130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.255 
[2024-12-06 04:12:55.744137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:08.255 [2024-12-06 04:12:55.744146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:08.255 [2024-12-06 04:12:55.744157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.255 [2024-12-06 04:12:55.744166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:08.255 [2024-12-06 04:12:55.744173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:08.255 [2024-12-06 04:12:55.744182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:08.255 [2024-12-06 04:12:55.744189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:08.255 [2024-12-06 04:12:55.744199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:08.255 [2024-12-06 04:12:55.744205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:08.255 [2024-12-06 04:12:55.744214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:08.255 [2024-12-06 04:12:55.744220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:08.255 [2024-12-06 04:12:55.744229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:08.255 [2024-12-06 04:12:55.744235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:08.255 [2024-12-06 04:12:55.744245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:08.255 [2024-12-06 04:12:55.744251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:08.255 [2024-12-06 04:12:55.744260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:08.255 [2024-12-06 04:12:55.744266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:08.255 [2024-12-06 04:12:55.744274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:08.255 [2024-12-06 04:12:55.744280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:08.255 [2024-12-06 04:12:55.744288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:08.255 [2024-12-06 04:12:55.744295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:08.255 [2024-12-06 04:12:55.744303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:08.255 [2024-12-06 04:12:55.744310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:08.255 [2024-12-06 04:12:55.744320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.255 [2024-12-06 04:12:55.744327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:08.255 [2024-12-06 04:12:55.744335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:08.255 [2024-12-06 04:12:55.744341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.255 [2024-12-06 04:12:55.744349] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:08.255 [2024-12-06 04:12:55.744358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:08.255 [2024-12-06 04:12:55.744367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:08.255 [2024-12-06 04:12:55.744375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.255 [2024-12-06 04:12:55.744385] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:22:08.255 [2024-12-06 04:12:55.744392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:08.255 [2024-12-06 04:12:55.744400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:08.255 [2024-12-06 04:12:55.744407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:08.255 [2024-12-06 04:12:55.744416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:08.255 [2024-12-06 04:12:55.744422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:08.255 [2024-12-06 04:12:55.744432] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:08.255 [2024-12-06 04:12:55.744441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:08.255 [2024-12-06 04:12:55.744454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:08.255 [2024-12-06 04:12:55.744462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:08.255 [2024-12-06 04:12:55.744471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:08.255 [2024-12-06 04:12:55.744478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:08.255 [2024-12-06 04:12:55.744487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:08.255 [2024-12-06 04:12:55.744494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:08.255 [2024-12-06 04:12:55.744504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:08.255 [2024-12-06 04:12:55.744511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:08.255 [2024-12-06 04:12:55.744520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:08.255 [2024-12-06 04:12:55.744527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:08.255 [2024-12-06 04:12:55.744545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:08.255 [2024-12-06 04:12:55.744553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:08.255 [2024-12-06 04:12:55.744561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:08.255 [2024-12-06 04:12:55.744569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:08.255 [2024-12-06 04:12:55.744578] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:08.255 [2024-12-06 
04:12:55.744586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:08.255 [2024-12-06 04:12:55.744598] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:08.255 [2024-12-06 04:12:55.744605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:08.255 [2024-12-06 04:12:55.744614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:08.255 [2024-12-06 04:12:55.744621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:08.255 [2024-12-06 04:12:55.744630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.255 [2024-12-06 04:12:55.744638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:08.255 [2024-12-06 04:12:55.744646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:22:08.255 [2024-12-06 04:12:55.744662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.255 [2024-12-06 04:12:55.774665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.255 [2024-12-06 04:12:55.774738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:08.255 [2024-12-06 04:12:55.774754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.924 ms 00:22:08.255 [2024-12-06 04:12:55.774766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.255 [2024-12-06 04:12:55.774896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.255 [2024-12-06 04:12:55.774908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:08.255 [2024-12-06 04:12:55.774920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:08.255 [2024-12-06 04:12:55.774929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.516 [2024-12-06 04:12:55.809169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.516 [2024-12-06 04:12:55.809216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:08.516 [2024-12-06 04:12:55.809231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.212 ms 00:22:08.516 [2024-12-06 04:12:55.809239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.516 [2024-12-06 04:12:55.809325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.516 [2024-12-06 04:12:55.809335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:08.516 [2024-12-06 04:12:55.809347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:08.516 [2024-12-06 04:12:55.809355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.516 [2024-12-06 04:12:55.809908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.516 [2024-12-06 04:12:55.809942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:08.516 [2024-12-06 04:12:55.809954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:22:08.516 [2024-12-06 04:12:55.809962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:08.516 [2024-12-06 04:12:55.810108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.516 [2024-12-06 04:12:55.810117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:08.516 [2024-12-06 04:12:55.810128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:22:08.516 [2024-12-06 04:12:55.810136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.516 [2024-12-06 04:12:55.827665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.516 [2024-12-06 04:12:55.827711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:08.516 [2024-12-06 04:12:55.827751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.503 ms 00:22:08.516 [2024-12-06 04:12:55.827759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.516 [2024-12-06 04:12:55.858851] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:08.516 [2024-12-06 04:12:55.858913] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:08.516 [2024-12-06 04:12:55.858931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.517 [2024-12-06 04:12:55.858941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:08.517 [2024-12-06 04:12:55.858954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.051 ms 00:22:08.517 [2024-12-06 04:12:55.858969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.517 [2024-12-06 04:12:55.885168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.517 [2024-12-06 04:12:55.885223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:08.517 [2024-12-06 04:12:55.885239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.095 ms 00:22:08.517 [2024-12-06 04:12:55.885248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.517 [2024-12-06 04:12:55.898160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.517 [2024-12-06 04:12:55.898206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:08.517 [2024-12-06 04:12:55.898224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.812 ms 00:22:08.517 [2024-12-06 04:12:55.898231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.517 [2024-12-06 04:12:55.910933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.517 [2024-12-06 04:12:55.910978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:08.517 [2024-12-06 04:12:55.910992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.608 ms 00:22:08.517 [2024-12-06 04:12:55.910999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.517 [2024-12-06 04:12:55.911674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.517 [2024-12-06 04:12:55.911708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:08.517 [2024-12-06 04:12:55.911737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:22:08.517 [2024-12-06 04:12:55.911745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.517 [2024-12-06 
04:12:55.976987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.517 [2024-12-06 04:12:55.977055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:08.517 [2024-12-06 04:12:55.977073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.210 ms 00:22:08.517 [2024-12-06 04:12:55.977083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.517 [2024-12-06 04:12:55.988452] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:08.517 [2024-12-06 04:12:56.007626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.517 [2024-12-06 04:12:56.007691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:08.517 [2024-12-06 04:12:56.007708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.440 ms 00:22:08.517 [2024-12-06 04:12:56.007734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.517 [2024-12-06 04:12:56.007823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.517 [2024-12-06 04:12:56.007837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:08.517 [2024-12-06 04:12:56.007847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:08.517 [2024-12-06 04:12:56.007858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.517 [2024-12-06 04:12:56.007915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.517 [2024-12-06 04:12:56.007926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:08.517 [2024-12-06 04:12:56.007935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:08.517 [2024-12-06 04:12:56.007947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.517 [2024-12-06 04:12:56.007973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.517 [2024-12-06 04:12:56.007983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:08.517 [2024-12-06 04:12:56.007992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:08.517 [2024-12-06 04:12:56.008005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.517 [2024-12-06 04:12:56.008042] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:08.517 [2024-12-06 04:12:56.008057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.517 [2024-12-06 04:12:56.008068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:08.517 [2024-12-06 04:12:56.008077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:08.517 [2024-12-06 04:12:56.008085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.517 [2024-12-06 04:12:56.034121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.517 [2024-12-06 04:12:56.034176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:08.517 [2024-12-06 04:12:56.034192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.001 ms 00:22:08.517 [2024-12-06 04:12:56.034201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.517 [2024-12-06 04:12:56.034319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.517 [2024-12-06 04:12:56.034331] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:08.517 [2024-12-06 04:12:56.034344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:08.517 [2024-12-06 04:12:56.034356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.517 [2024-12-06 04:12:56.036054] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:08.517 [2024-12-06 04:12:56.039489] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 321.785 ms, result 0 00:22:08.517 [2024-12-06 04:12:56.041896] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:08.779 Some configs were skipped because the RPC state that can call them passed over. 00:22:08.779 04:12:56 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:22:08.779 [2024-12-06 04:12:56.282812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.779 [2024-12-06 04:12:56.282896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:08.779 [2024-12-06 04:12:56.282912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.300 ms 00:22:08.779 [2024-12-06 04:12:56.282924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.779 [2024-12-06 04:12:56.282963] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.457 ms, result 0 00:22:08.779 true 00:22:08.779 04:12:56 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:22:09.040 [2024-12-06 04:12:56.498790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.040 [2024-12-06 04:12:56.498851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:09.040 [2024-12-06 04:12:56.498866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.996 ms 00:22:09.040 [2024-12-06 04:12:56.498874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.040 [2024-12-06 04:12:56.498915] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.126 ms, result 0 00:22:09.040 true 00:22:09.040 04:12:56 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76964 00:22:09.040 04:12:56 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76964 ']' 00:22:09.040 04:12:56 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76964 00:22:09.040 04:12:56 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:22:09.040 04:12:56 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.040 04:12:56 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76964 00:22:09.040 killing process with pid 76964 00:22:09.040 04:12:56 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:09.040 04:12:56 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:09.040 04:12:56 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76964' 00:22:09.040 04:12:56 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76964 00:22:09.040 04:12:56 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76964 00:22:09.986 [2024-12-06 04:12:57.218008] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.986 [2024-12-06 04:12:57.218056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:09.986 [2024-12-06 04:12:57.218067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:09.986 [2024-12-06 04:12:57.218074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.986 [2024-12-06 04:12:57.218093] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:09.986 [2024-12-06 04:12:57.220243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.986 [2024-12-06 04:12:57.220270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:09.986 [2024-12-06 04:12:57.220281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.135 ms 00:22:09.986 [2024-12-06 04:12:57.220288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.986 [2024-12-06 04:12:57.220515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.986 [2024-12-06 04:12:57.220527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:09.986 [2024-12-06 04:12:57.220535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:22:09.986 [2024-12-06 04:12:57.220540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.986 [2024-12-06 04:12:57.223563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.986 [2024-12-06 04:12:57.223590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:09.986 [2024-12-06 04:12:57.223600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.006 ms 00:22:09.986 [2024-12-06 04:12:57.223605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.986 [2024-12-06 04:12:57.228917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.986 [2024-12-06 04:12:57.228945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:09.986 [2024-12-06 04:12:57.228956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.237 ms 00:22:09.986 [2024-12-06 04:12:57.228962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.986 [2024-12-06 04:12:57.236508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.986 [2024-12-06 04:12:57.236540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:09.986 [2024-12-06 04:12:57.236550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.505 ms 00:22:09.986 [2024-12-06 04:12:57.236556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.986 [2024-12-06 04:12:57.243086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.986 [2024-12-06 04:12:57.243117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:09.986 [2024-12-06 04:12:57.243127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.499 ms 00:22:09.986 [2024-12-06 04:12:57.243134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.986 [2024-12-06 04:12:57.243245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.986 [2024-12-06 04:12:57.243253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:09.986 [2024-12-06 04:12:57.243261] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:09.986 [2024-12-06 04:12:57.243266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.986 [2024-12-06 04:12:57.251110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.986 [2024-12-06 04:12:57.251135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:09.986 [2024-12-06 04:12:57.251143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.827 ms 00:22:09.986 [2024-12-06 04:12:57.251149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.986 [2024-12-06 04:12:57.258317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.986 [2024-12-06 04:12:57.258344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:09.986 [2024-12-06 04:12:57.258355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.138 ms 00:22:09.986 [2024-12-06 04:12:57.258361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.986 [2024-12-06 04:12:57.265430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.986 [2024-12-06 04:12:57.265456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:09.986 [2024-12-06 04:12:57.265464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.031 ms 00:22:09.986 [2024-12-06 04:12:57.265470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.986 [2024-12-06 04:12:57.272390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.986 [2024-12-06 04:12:57.272415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:09.986 [2024-12-06 04:12:57.272423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.871 ms 00:22:09.986 [2024-12-06 04:12:57.272429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.986 [2024-12-06 04:12:57.272455] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:09.986 [2024-12-06 04:12:57.272466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-12-06 04:12:57.272475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-12-06 04:12:57.272481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-12-06 04:12:57.272488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-12-06 04:12:57.272494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-12-06 04:12:57.272502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-12-06 04:12:57.272508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-12-06 04:12:57.272515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-12-06 04:12:57.272520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-12-06 04:12:57.272528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-12-06 04:12:57.272533] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-12-06 04:12:57.272540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-12-06 04:12:57.272546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-12-06 04:12:57.272553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-12-06 04:12:57.272558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-12-06 04:12:57.272567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-12-06 04:12:57.272572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-12-06 04:12:57.272579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-12-06 04:12:57.272584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 
[2024-12-06 04:12:57.272694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:22:09.987 [2024-12-06 04:12:57.272862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.272998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.273004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.273011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-12-06 04:12:57.273016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:22:09.988 [2024-12-06 04:12:57.273024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:09.988 [2024-12-06 04:12:57.273030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:09.988 [2024-12-06 04:12:57.273036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:09.988 [2024-12-06 04:12:57.273042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:09.988 [2024-12-06 04:12:57.273048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:09.988 [2024-12-06 04:12:57.273054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:09.988 [2024-12-06 04:12:57.273061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:09.988 [2024-12-06 04:12:57.273067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:09.988 [2024-12-06 04:12:57.273073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:09.988 [2024-12-06 04:12:57.273079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:09.988 [2024-12-06 04:12:57.273087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:09.988 [2024-12-06 04:12:57.273093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:09.988 [2024-12-06 04:12:57.273100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:09.988 [2024-12-06 04:12:57.273106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:09.988 [2024-12-06 04:12:57.273113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:09.988 [2024-12-06 04:12:57.273128] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:09.988 [2024-12-06 04:12:57.273138] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 35e3cd2c-a5a2-441a-aebe-c05fd677fe36 00:22:09.988 [2024-12-06 04:12:57.273146] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:09.988 [2024-12-06 04:12:57.273152] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:09.988 [2024-12-06 04:12:57.273157] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:09.988 [2024-12-06 04:12:57.273165] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:09.988 [2024-12-06 04:12:57.273170] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:09.988 [2024-12-06 04:12:57.273178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:09.988 [2024-12-06 04:12:57.273183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:09.988 [2024-12-06 04:12:57.273189] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:09.988 [2024-12-06 04:12:57.273194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:09.988 [2024-12-06 04:12:57.273200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:09.988 [2024-12-06 04:12:57.273206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:09.988 [2024-12-06 04:12:57.273213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:22:09.988 [2024-12-06 04:12:57.273219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.988 [2024-12-06 04:12:57.282846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.988 [2024-12-06 04:12:57.282871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:09.988 [2024-12-06 04:12:57.282882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.608 ms 00:22:09.988 [2024-12-06 04:12:57.282888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.988 [2024-12-06 04:12:57.283167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.988 [2024-12-06 04:12:57.283185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:09.988 [2024-12-06 04:12:57.283195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:22:09.988 [2024-12-06 04:12:57.283200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.988 [2024-12-06 04:12:57.317923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.988 [2024-12-06 04:12:57.317953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:09.988 [2024-12-06 04:12:57.317962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.988 [2024-12-06 04:12:57.317968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.988 [2024-12-06 04:12:57.318973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.988 [2024-12-06 04:12:57.319006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:09.988 [2024-12-06 04:12:57.319016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.988 [2024-12-06 04:12:57.319022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.988 [2024-12-06 04:12:57.319063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.988 [2024-12-06 04:12:57.319070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:09.988 [2024-12-06 04:12:57.319079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.988 [2024-12-06 04:12:57.319084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.988 [2024-12-06 04:12:57.319099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.988 [2024-12-06 04:12:57.319105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:09.988 [2024-12-06 04:12:57.319112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.988 [2024-12-06 04:12:57.319119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.988 [2024-12-06 04:12:57.378417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.988 [2024-12-06 04:12:57.378450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:09.988 [2024-12-06 04:12:57.378460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.988 [2024-12-06 04:12:57.378475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.988 [2024-12-06 
04:12:57.427204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.988 [2024-12-06 04:12:57.427238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:09.988 [2024-12-06 04:12:57.427248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.988 [2024-12-06 04:12:57.427257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.988 [2024-12-06 04:12:57.427322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.988 [2024-12-06 04:12:57.427329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:09.988 [2024-12-06 04:12:57.427339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.988 [2024-12-06 04:12:57.427344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.988 [2024-12-06 04:12:57.427367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.988 [2024-12-06 04:12:57.427374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:09.988 [2024-12-06 04:12:57.427381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.988 [2024-12-06 04:12:57.427387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.988 [2024-12-06 04:12:57.427458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.988 [2024-12-06 04:12:57.427465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:09.988 [2024-12-06 04:12:57.427472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.988 [2024-12-06 04:12:57.427478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.988 [2024-12-06 04:12:57.427503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.988 [2024-12-06 04:12:57.427510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:09.988 [2024-12-06 04:12:57.427517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.988 [2024-12-06 04:12:57.427523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.988 [2024-12-06 04:12:57.427555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.988 [2024-12-06 04:12:57.427561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:09.989 [2024-12-06 04:12:57.427570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.989 [2024-12-06 04:12:57.427575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.989 [2024-12-06 04:12:57.427611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.989 [2024-12-06 04:12:57.427619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:09.989 [2024-12-06 04:12:57.427626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.989 [2024-12-06 04:12:57.427631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.989 [2024-12-06 04:12:57.427747] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 209.711 ms, result 0 00:22:10.562 04:12:58 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:10.562 [2024-12-06 04:12:58.076823] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:22:10.562 [2024-12-06 04:12:58.076955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77018 ] 00:22:10.823 [2024-12-06 04:12:58.232938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.823 [2024-12-06 04:12:58.322385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.109 [2024-12-06 04:12:58.532823] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:11.109 [2024-12-06 04:12:58.532875] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:11.371 [2024-12-06 04:12:58.680652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.371 [2024-12-06 04:12:58.680694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:11.371 [2024-12-06 04:12:58.680704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:11.371 [2024-12-06 04:12:58.680711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.371 [2024-12-06 04:12:58.682840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.371 [2024-12-06 04:12:58.682871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:11.371 [2024-12-06 04:12:58.682878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.107 ms 00:22:11.371 [2024-12-06 04:12:58.682884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.371 [2024-12-06 04:12:58.682943] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:11.371 [2024-12-06 04:12:58.683447] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:11.371 [2024-12-06 04:12:58.683469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.371 [2024-12-06 04:12:58.683475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:11.371 [2024-12-06 04:12:58.683482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:22:11.371 [2024-12-06 04:12:58.683488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.371 [2024-12-06 04:12:58.684500] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:11.371 [2024-12-06 04:12:58.694073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.371 [2024-12-06 04:12:58.694103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:11.371 [2024-12-06 04:12:58.694111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.575 ms 00:22:11.371 [2024-12-06 04:12:58.694118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.371 [2024-12-06 04:12:58.694183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.371 [2024-12-06 04:12:58.694192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:11.371 [2024-12-06 04:12:58.694198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:11.371 [2024-12-06 
04:12:58.694204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.371 [2024-12-06 04:12:58.698450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.371 [2024-12-06 04:12:58.698487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:11.371 [2024-12-06 04:12:58.698494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.218 ms 00:22:11.371 [2024-12-06 04:12:58.698500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.371 [2024-12-06 04:12:58.698575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.371 [2024-12-06 04:12:58.698583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:11.371 [2024-12-06 04:12:58.698589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:11.371 [2024-12-06 04:12:58.698594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.371 [2024-12-06 04:12:58.698612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.371 [2024-12-06 04:12:58.698619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:11.371 [2024-12-06 04:12:58.698625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:11.371 [2024-12-06 04:12:58.698630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.371 [2024-12-06 04:12:58.698647] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:11.371 [2024-12-06 04:12:58.701298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.371 [2024-12-06 04:12:58.701322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:11.371 [2024-12-06 04:12:58.701329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.655 ms 00:22:11.371 [2024-12-06 04:12:58.701335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.371 [2024-12-06 04:12:58.701364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.371 [2024-12-06 04:12:58.701371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:11.371 [2024-12-06 04:12:58.701378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:11.371 [2024-12-06 04:12:58.701383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.371 [2024-12-06 04:12:58.701398] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:11.371 [2024-12-06 04:12:58.701414] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:11.371 [2024-12-06 04:12:58.701440] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:11.371 [2024-12-06 04:12:58.701451] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:11.371 [2024-12-06 04:12:58.701529] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:11.371 [2024-12-06 04:12:58.701537] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:11.371 [2024-12-06 04:12:58.701545] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:22:11.371 [2024-12-06 04:12:58.701555] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:11.371 [2024-12-06 04:12:58.701563] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:11.371 [2024-12-06 04:12:58.701569] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:11.371 [2024-12-06 04:12:58.701576] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:11.371 [2024-12-06 04:12:58.701582] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:11.371 [2024-12-06 04:12:58.701587] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:11.371 [2024-12-06 04:12:58.701593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.371 [2024-12-06 04:12:58.701599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:11.371 [2024-12-06 04:12:58.701605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:22:11.371 [2024-12-06 04:12:58.701610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.371 [2024-12-06 04:12:58.701677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.371 [2024-12-06 04:12:58.701685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:11.371 [2024-12-06 04:12:58.701691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:11.371 [2024-12-06 04:12:58.701696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.371 [2024-12-06 04:12:58.701784] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:11.371 [2024-12-06 04:12:58.701798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:11.371 [2024-12-06 04:12:58.701805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:11.371 [2024-12-06 04:12:58.701811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.371 [2024-12-06 04:12:58.701817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:11.371 [2024-12-06 04:12:58.701823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:11.371 [2024-12-06 04:12:58.701828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:11.371 [2024-12-06 04:12:58.701833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:11.371 [2024-12-06 04:12:58.701839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:11.371 [2024-12-06 04:12:58.701844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:11.371 [2024-12-06 04:12:58.701849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:11.371 [2024-12-06 04:12:58.701859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:11.371 [2024-12-06 04:12:58.701863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:11.371 [2024-12-06 04:12:58.701869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:11.371 [2024-12-06 04:12:58.701874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:11.371 [2024-12-06 04:12:58.701880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.371 [2024-12-06 04:12:58.701886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:22:11.371 [2024-12-06 04:12:58.701891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:11.371 [2024-12-06 04:12:58.701896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.371 [2024-12-06 04:12:58.701902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:11.371 [2024-12-06 04:12:58.701907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:11.371 [2024-12-06 04:12:58.701912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:11.371 [2024-12-06 04:12:58.701917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:11.371 [2024-12-06 04:12:58.701921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:11.371 [2024-12-06 04:12:58.701926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:11.371 [2024-12-06 04:12:58.701931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:11.371 [2024-12-06 04:12:58.701936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:11.371 [2024-12-06 04:12:58.701941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:11.371 [2024-12-06 04:12:58.701946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:11.371 [2024-12-06 04:12:58.701951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:11.371 [2024-12-06 04:12:58.701955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:11.371 [2024-12-06 04:12:58.701960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:11.371 [2024-12-06 04:12:58.701965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:11.371 [2024-12-06 04:12:58.701970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:11.371 [2024-12-06 04:12:58.701975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:11.371 [2024-12-06 04:12:58.701980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:11.371 [2024-12-06 04:12:58.701985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:11.371 [2024-12-06 04:12:58.701990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:11.371 [2024-12-06 04:12:58.701995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:11.371 [2024-12-06 04:12:58.702000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.371 [2024-12-06 04:12:58.702005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:11.371 [2024-12-06 04:12:58.702010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:11.371 [2024-12-06 04:12:58.702015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.371 [2024-12-06 04:12:58.702020] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:11.371 [2024-12-06 04:12:58.702026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:11.371 [2024-12-06 04:12:58.702033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:11.371 [2024-12-06 04:12:58.702039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.371 [2024-12-06 04:12:58.702045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:11.371 [2024-12-06 04:12:58.702051] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:11.371 [2024-12-06 04:12:58.702056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:11.371 [2024-12-06 04:12:58.702061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:11.371 [2024-12-06 04:12:58.702066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:11.371 [2024-12-06 04:12:58.702071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:11.371 [2024-12-06 04:12:58.702077] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:11.371 [2024-12-06 04:12:58.702083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:11.371 [2024-12-06 04:12:58.702089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:11.371 [2024-12-06 04:12:58.702095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:11.371 [2024-12-06 04:12:58.702100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:11.371 [2024-12-06 04:12:58.702106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:11.371 [2024-12-06 04:12:58.702111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:11.371 [2024-12-06 04:12:58.702116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:11.371 [2024-12-06 04:12:58.702121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:11.371 [2024-12-06 04:12:58.702126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:11.371 [2024-12-06 04:12:58.702131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:11.371 [2024-12-06 04:12:58.702136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:11.371 [2024-12-06 04:12:58.702142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:11.371 [2024-12-06 04:12:58.702147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:11.371 [2024-12-06 04:12:58.702153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:11.371 [2024-12-06 04:12:58.702158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:11.371 [2024-12-06 04:12:58.702163] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:11.371 [2024-12-06 04:12:58.702169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:11.371 [2024-12-06 04:12:58.702175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:11.371 [2024-12-06 04:12:58.702181] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:11.371 [2024-12-06 04:12:58.702187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:11.371 [2024-12-06 04:12:58.702192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:11.371 [2024-12-06 04:12:58.702198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.371 [2024-12-06 04:12:58.702205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:11.371 [2024-12-06 04:12:58.702212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:22:11.371 [2024-12-06 04:12:58.702217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.371 [2024-12-06 04:12:58.723023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.371 [2024-12-06 04:12:58.723054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:11.371 [2024-12-06 04:12:58.723062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.765 ms 00:22:11.371 [2024-12-06 04:12:58.723070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.371 [2024-12-06 04:12:58.723169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.371 [2024-12-06 04:12:58.723176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:11.371 [2024-12-06 04:12:58.723183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:11.371 [2024-12-06 04:12:58.723190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.371 [2024-12-06 04:12:58.761447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.371 [2024-12-06 04:12:58.761483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:11.371 [2024-12-06 04:12:58.761493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.241 ms 00:22:11.371 [2024-12-06 04:12:58.761499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.371 [2024-12-06 04:12:58.761559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.371 [2024-12-06 04:12:58.761568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:11.371 [2024-12-06 04:12:58.761575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:11.371 [2024-12-06 04:12:58.761581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.371 [2024-12-06 04:12:58.761888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.371 [2024-12-06 04:12:58.761905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:11.371 [2024-12-06 04:12:58.761916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:22:11.371 [2024-12-06 04:12:58.761922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.371 [2024-12-06 04:12:58.762026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:11.372 [2024-12-06 04:12:58.762040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:11.372 [2024-12-06 04:12:58.762047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:22:11.372 [2024-12-06 04:12:58.762053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.372 [2024-12-06 04:12:58.772694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.372 [2024-12-06 04:12:58.772739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:11.372 [2024-12-06 04:12:58.772747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.626 ms 00:22:11.372 [2024-12-06 04:12:58.772753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.372 [2024-12-06 04:12:58.782239] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:11.372 [2024-12-06 04:12:58.782267] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:11.372 [2024-12-06 04:12:58.782276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.372 [2024-12-06 04:12:58.782283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:11.372 [2024-12-06 04:12:58.782290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.434 ms 00:22:11.372 [2024-12-06 04:12:58.782295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.372 [2024-12-06 04:12:58.800704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.372 [2024-12-06 04:12:58.800739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:11.372 [2024-12-06 04:12:58.800748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.361 ms 00:22:11.372 [2024-12-06 04:12:58.800755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.372 [2024-12-06 04:12:58.809683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.372 [2024-12-06 04:12:58.809711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:11.372 [2024-12-06 04:12:58.809725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.870 ms 00:22:11.372 [2024-12-06 04:12:58.809731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.372 [2024-12-06 04:12:58.818252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.372 [2024-12-06 04:12:58.818278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:11.372 [2024-12-06 04:12:58.818286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.481 ms 00:22:11.372 [2024-12-06 04:12:58.818291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.372 [2024-12-06 04:12:58.818780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.372 [2024-12-06 04:12:58.818802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:11.372 [2024-12-06 04:12:58.818809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:22:11.372 [2024-12-06 04:12:58.818815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.372 [2024-12-06 04:12:58.861859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.372 [2024-12-06 04:12:58.861899] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:11.372 [2024-12-06 04:12:58.861910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.025 ms 00:22:11.372 [2024-12-06 04:12:58.861918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.372 [2024-12-06 04:12:58.869820] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:11.372 [2024-12-06 04:12:58.881286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.372 [2024-12-06 04:12:58.881319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:11.372 [2024-12-06 04:12:58.881333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.299 ms 00:22:11.372 [2024-12-06 04:12:58.881339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.372 [2024-12-06 04:12:58.881414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.372 [2024-12-06 04:12:58.881422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:11.372 [2024-12-06 04:12:58.881429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:11.372 [2024-12-06 04:12:58.881435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.372 [2024-12-06 04:12:58.881471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.372 [2024-12-06 04:12:58.881478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:11.372 [2024-12-06 04:12:58.881487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:11.372 [2024-12-06 04:12:58.881495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.372 [2024-12-06 04:12:58.881517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.372 [2024-12-06 04:12:58.881524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:11.372 [2024-12-06 04:12:58.881530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:11.372 [2024-12-06 04:12:58.881536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.372 [2024-12-06 04:12:58.881559] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:11.372 [2024-12-06 04:12:58.881567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.372 [2024-12-06 04:12:58.881573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:11.372 [2024-12-06 04:12:58.881579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:11.372 [2024-12-06 04:12:58.881585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.632 [2024-12-06 04:12:58.899496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.632 [2024-12-06 04:12:58.899527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:11.632 [2024-12-06 04:12:58.899536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.895 ms 00:22:11.632 [2024-12-06 04:12:58.899542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.632 [2024-12-06 04:12:58.899611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.632 [2024-12-06 04:12:58.899619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:11.632 [2024-12-06 04:12:58.899625] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:22:11.632 [2024-12-06 04:12:58.899635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.632 [2024-12-06 04:12:58.900622] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:11.632 [2024-12-06 04:12:58.903053] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 219.745 ms, result 0 00:22:11.632 [2024-12-06 04:12:58.904709] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:11.632 [2024-12-06 04:12:58.927421] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:12.569  [2024-12-06T04:13:01.031Z] Copying: 15/256 [MB] (15 MBps) [2024-12-06T04:13:02.399Z] Copying: 31/256 [MB] (15 MBps) [2024-12-06T04:13:03.332Z] Copying: 70/256 [MB] (38 MBps) [2024-12-06T04:13:04.266Z] Copying: 100/256 [MB] (30 MBps) [2024-12-06T04:13:05.205Z] Copying: 113/256 [MB] (12 MBps) [2024-12-06T04:13:06.143Z] Copying: 124/256 [MB] (11 MBps) [2024-12-06T04:13:07.084Z] Copying: 135/256 [MB] (10 MBps) [2024-12-06T04:13:08.018Z] Copying: 150/256 [MB] (15 MBps) [2024-12-06T04:13:09.419Z] Copying: 164/256 [MB] (14 MBps) [2024-12-06T04:13:09.987Z] Copying: 188/256 [MB] (23 MBps) [2024-12-06T04:13:11.359Z] Copying: 230/256 [MB] (41 MBps) [2024-12-06T04:13:11.617Z] Copying: 249/256 [MB] (19 MBps) [2024-12-06T04:13:11.875Z] Copying: 256/256 [MB] (average 20 MBps)[2024-12-06 04:13:11.718980] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:24.348 [2024-12-06 04:13:11.735188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.348 [2024-12-06 04:13:11.735230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:24.348 [2024-12-06 04:13:11.735251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:24.348 [2024-12-06 04:13:11.735260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.348 [2024-12-06 04:13:11.735283] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:24.348 [2024-12-06 04:13:11.737832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.348 [2024-12-06 04:13:11.737862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:24.348 [2024-12-06 04:13:11.737873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.535 ms 00:22:24.348 [2024-12-06 04:13:11.737881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.348 [2024-12-06 04:13:11.738143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.348 [2024-12-06 04:13:11.738159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:24.348 [2024-12-06 04:13:11.738168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:22:24.348 [2024-12-06 04:13:11.738175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.348 [2024-12-06 04:13:11.741859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.348 [2024-12-06 04:13:11.741881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:24.348 [2024-12-06 04:13:11.741891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.666 ms 00:22:24.348 [2024-12-06 04:13:11.741900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.348 [2024-12-06 04:13:11.748812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.348 [2024-12-06 04:13:11.748842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:24.348 [2024-12-06 04:13:11.748852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.895 ms 00:22:24.348 [2024-12-06 04:13:11.748860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.348 [2024-12-06 04:13:11.771565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.348 [2024-12-06 04:13:11.771601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:24.348 [2024-12-06 04:13:11.771612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.650 ms 00:22:24.348 [2024-12-06 04:13:11.771619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.348 [2024-12-06 04:13:11.785200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.348 [2024-12-06 04:13:11.785236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:24.348 [2024-12-06 04:13:11.785247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.558 ms 00:22:24.348 [2024-12-06 04:13:11.785254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.348 [2024-12-06 04:13:11.785390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.348 [2024-12-06 04:13:11.785400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:24.348 [2024-12-06 04:13:11.785415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:22:24.348 [2024-12-06 04:13:11.785423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.348 [2024-12-06 04:13:11.808468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.348 [2024-12-06 04:13:11.808500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:24.348 [2024-12-06 04:13:11.808510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.029 ms 00:22:24.348 [2024-12-06 04:13:11.808518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.348 [2024-12-06 04:13:11.830945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.348 [2024-12-06 04:13:11.830976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:24.348 [2024-12-06 04:13:11.830986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.407 ms 00:22:24.348 [2024-12-06 04:13:11.830993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.348 [2024-12-06 04:13:11.853420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.348 [2024-12-06 04:13:11.853450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:24.348 [2024-12-06 04:13:11.853460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.407 ms 00:22:24.348 [2024-12-06 04:13:11.853467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.607 [2024-12-06 04:13:11.875807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.607 [2024-12-06 04:13:11.875837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:24.607 [2024-12-06 
04:13:11.875846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.294 ms 00:22:24.607 [2024-12-06 04:13:11.875853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.607 [2024-12-06 04:13:11.875872] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:24.607 [2024-12-06 04:13:11.875885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:24.607 [2024-12-06 04:13:11.875895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:24.607 [2024-12-06 04:13:11.875903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:24.607 [2024-12-06 04:13:11.875910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:24.607 [2024-12-06 04:13:11.875917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:24.607 [2024-12-06 04:13:11.875925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:24.607 [2024-12-06 04:13:11.875932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:24.607 [2024-12-06 04:13:11.875940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:24.607 [2024-12-06 04:13:11.875947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:24.607 [2024-12-06 04:13:11.875955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:24.607 [2024-12-06 04:13:11.875963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:24.607 [2024-12-06 04:13:11.875970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:24.607 [2024-12-06 04:13:11.875978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:24.607 [2024-12-06 04:13:11.875985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:24.607 [2024-12-06 04:13:11.875992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:24.607 [2024-12-06 04:13:11.876000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:24.607 [2024-12-06 04:13:11.876007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:24.607 [2024-12-06 04:13:11.876014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:24.607 [2024-12-06 04:13:11.876021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876238] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876425] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 
04:13:11.876615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:24.608 [2024-12-06 04:13:11.876646] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:24.608 [2024-12-06 04:13:11.876654] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 35e3cd2c-a5a2-441a-aebe-c05fd677fe36 00:22:24.608 [2024-12-06 04:13:11.876661] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:24.608 [2024-12-06 04:13:11.876668] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:24.608 [2024-12-06 04:13:11.876675] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:24.608 [2024-12-06 04:13:11.876683] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:24.608 [2024-12-06 04:13:11.876690] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:24.608 [2024-12-06 04:13:11.876699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:24.608 [2024-12-06 04:13:11.876706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:24.609 [2024-12-06 04:13:11.876712] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:24.609 [2024-12-06 04:13:11.876734] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:24.609 [2024-12-06 04:13:11.876741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.609 [2024-12-06 04:13:11.876748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:24.609 [2024-12-06 04:13:11.876757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.870 ms 00:22:24.609 [2024-12-06 04:13:11.876764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.609 [2024-12-06 04:13:11.888997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.609 [2024-12-06 04:13:11.889028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:24.609 [2024-12-06 04:13:11.889039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.216 ms 00:22:24.609 [2024-12-06 04:13:11.889050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.609 [2024-12-06 04:13:11.889388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.609 [2024-12-06 04:13:11.889408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:24.609 [2024-12-06 04:13:11.889417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:22:24.609 [2024-12-06 04:13:11.889424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.609 [2024-12-06 04:13:11.924027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.609 [2024-12-06 04:13:11.924061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:24.609 [2024-12-06 04:13:11.924074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.609 [2024-12-06 04:13:11.924081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.609 [2024-12-06 04:13:11.924160] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:22:24.609 [2024-12-06 04:13:11.924170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:24.609 [2024-12-06 04:13:11.924177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.609 [2024-12-06 04:13:11.924184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.609 [2024-12-06 04:13:11.924226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.609 [2024-12-06 04:13:11.924235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:24.609 [2024-12-06 04:13:11.924243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.609 [2024-12-06 04:13:11.924250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.609 [2024-12-06 04:13:11.924269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.609 [2024-12-06 04:13:11.924276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:24.609 [2024-12-06 04:13:11.924284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.609 [2024-12-06 04:13:11.924291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.609 [2024-12-06 04:13:12.000866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.609 [2024-12-06 04:13:12.000905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:24.609 [2024-12-06 04:13:12.000915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.609 [2024-12-06 04:13:12.000927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.609 [2024-12-06 04:13:12.063856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.609 [2024-12-06 04:13:12.063895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:24.609 [2024-12-06 04:13:12.063905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.609 [2024-12-06 04:13:12.063913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.609 [2024-12-06 04:13:12.063971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.609 [2024-12-06 04:13:12.063980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:24.609 [2024-12-06 04:13:12.063987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.609 [2024-12-06 04:13:12.063995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.609 [2024-12-06 04:13:12.064023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.609 [2024-12-06 04:13:12.064031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:24.609 [2024-12-06 04:13:12.064039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.609 [2024-12-06 04:13:12.064046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.609 [2024-12-06 04:13:12.064128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.609 [2024-12-06 04:13:12.064137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:24.609 [2024-12-06 04:13:12.064145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.609 [2024-12-06 04:13:12.064152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:22:24.609 [2024-12-06 04:13:12.064181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.609 [2024-12-06 04:13:12.064192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:24.609 [2024-12-06 04:13:12.064200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.609 [2024-12-06 04:13:12.064207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.609 [2024-12-06 04:13:12.064241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.609 [2024-12-06 04:13:12.064250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:24.609 [2024-12-06 04:13:12.064257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.609 [2024-12-06 04:13:12.064264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.609 [2024-12-06 04:13:12.064305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:24.609 [2024-12-06 04:13:12.064315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:24.609 [2024-12-06 04:13:12.064323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:24.609 [2024-12-06 04:13:12.064330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.609 [2024-12-06 04:13:12.064455] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 329.266 ms, result 0 00:22:25.543 00:22:25.543 00:22:25.543 04:13:12 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:25.803 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:22:25.803 04:13:13 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:22:25.803 04:13:13 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:22:25.803 04:13:13 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:25.803 04:13:13 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:25.803 04:13:13 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:22:25.803 04:13:13 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:26.063 04:13:13 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76964 00:22:26.063 04:13:13 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76964 ']' 00:22:26.063 04:13:13 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76964 00:22:26.063 Process with pid 76964 is not found 00:22:26.063 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76964) - No such process 00:22:26.063 04:13:13 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76964 is not found' 00:22:26.063 ************************************ 00:22:26.063 END TEST ftl_trim 00:22:26.063 ************************************ 00:22:26.063 00:22:26.063 real 1m26.671s 00:22:26.063 user 1m42.608s 00:22:26.063 sys 0m16.104s 00:22:26.063 04:13:13 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:26.063 04:13:13 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:26.063 04:13:13 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:26.063 04:13:13 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:26.063 04:13:13 ftl -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:22:26.063 04:13:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:26.063 ************************************ 00:22:26.063 START TEST ftl_restore 00:22:26.063 ************************************ 00:22:26.063 04:13:13 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:26.063 * Looking for test storage... 00:22:26.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:26.063 04:13:13 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:26.063 04:13:13 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:22:26.063 04:13:13 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:26.063 04:13:13 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:26.063 04:13:13 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:26.063 04:13:13 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:26.063 04:13:13 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:26.063 04:13:13 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:22:26.063 04:13:13 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:22:26.063 04:13:13 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:22:26.063 04:13:13 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:22:26.063 04:13:13 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:22:26.063 04:13:13 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:22:26.063 04:13:13 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:22:26.063 04:13:13 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:26.063 04:13:13 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:22:26.063 04:13:13 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:22:26.063 04:13:13 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:26.063 04:13:13 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:26.063 04:13:13 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:22:26.063 04:13:13 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:22:26.063 04:13:13 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:26.063 04:13:13 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:22:26.063 04:13:13 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:22:26.064 04:13:13 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:22:26.064 04:13:13 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:22:26.064 04:13:13 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:26.064 04:13:13 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:22:26.064 04:13:13 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:22:26.064 04:13:13 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:26.064 04:13:13 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:26.064 04:13:13 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:22:26.064 04:13:13 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:26.064 04:13:13 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:26.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.064 --rc genhtml_branch_coverage=1 00:22:26.064 --rc genhtml_function_coverage=1 00:22:26.064 --rc genhtml_legend=1 00:22:26.064 --rc geninfo_all_blocks=1 00:22:26.064 --rc geninfo_unexecuted_blocks=1 00:22:26.064 00:22:26.064 ' 00:22:26.064 04:13:13 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:26.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.064 --rc genhtml_branch_coverage=1 00:22:26.064 --rc genhtml_function_coverage=1 00:22:26.064 --rc genhtml_legend=1 00:22:26.064 --rc geninfo_all_blocks=1 00:22:26.064 --rc geninfo_unexecuted_blocks=1 00:22:26.064 00:22:26.064 ' 00:22:26.064 04:13:13 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:26.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.064 --rc genhtml_branch_coverage=1 00:22:26.064 --rc genhtml_function_coverage=1 00:22:26.064 --rc genhtml_legend=1 00:22:26.064 --rc geninfo_all_blocks=1 00:22:26.064 --rc geninfo_unexecuted_blocks=1 00:22:26.064 00:22:26.064 ' 00:22:26.064 04:13:13 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:26.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.064 --rc genhtml_branch_coverage=1 00:22:26.064 --rc genhtml_function_coverage=1 00:22:26.064 --rc genhtml_legend=1 00:22:26.064 --rc geninfo_all_blocks=1 00:22:26.064 --rc geninfo_unexecuted_blocks=1 00:22:26.064 00:22:26.064 ' 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
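The xtrace above is scripts/common.sh deciding which lcov option spelling to export: it splits the installed lcov version (1.15) and the threshold (2) on the separators ".", "-" and ":", compares the components numerically, and, since 1.15 < 2, keeps the legacy --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 option names. A minimal standalone sketch of that comparison follows (illustrative only: the helper name ver_lt is ours, and the real logic lives in cmp_versions in scripts/common.sh as traced above):

ver_lt() {                          # returns 0 (true) when version $1 < $2
    local IFS=.-:                   # same separators the trace splits on
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}    # missing components count as 0
        (( 10#$x < 10#$y )) && return 0    # 10# forces base-10 arithmetic
        (( 10#$x > 10#$y )) && return 1
    done
    return 1                        # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* option names"

This sketch assumes purely numeric version components, which holds for the 1.15-versus-2 comparison this run performs.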
00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.RNyHbuCoe0 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:26.064 
04:13:13 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77247 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77247 00:22:26.064 04:13:13 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77247 ']' 00:22:26.064 04:13:13 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.064 04:13:13 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:26.064 04:13:13 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.064 04:13:13 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:26.064 04:13:13 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:26.064 04:13:13 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:26.323 [2024-12-06 04:13:13.634364] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:22:26.323 [2024-12-06 04:13:13.634489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77247 ] 00:22:26.323 [2024-12-06 04:13:13.792478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.582 [2024-12-06 04:13:13.885577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.149 04:13:14 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:27.149 04:13:14 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:22:27.149 04:13:14 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:27.149 04:13:14 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:22:27.149 04:13:14 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:27.149 04:13:14 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:22:27.149 04:13:14 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:22:27.149 04:13:14 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:27.408 04:13:14 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:27.408 04:13:14 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:22:27.408 04:13:14 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:27.408 04:13:14 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:27.408 04:13:14 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:27.408 04:13:14 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:27.408 04:13:14 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:27.408 04:13:14 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:27.408 04:13:14 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:27.408 { 00:22:27.408 "name": "nvme0n1", 00:22:27.408 "aliases": [ 00:22:27.408 "3a860bd3-53de-40c2-b5af-f3d46f18f383" 00:22:27.408 ], 00:22:27.408 "product_name": "NVMe disk", 00:22:27.408 "block_size": 4096, 00:22:27.408 "num_blocks": 1310720, 00:22:27.408 "uuid": 
"3a860bd3-53de-40c2-b5af-f3d46f18f383", 00:22:27.408 "numa_id": -1, 00:22:27.408 "assigned_rate_limits": { 00:22:27.408 "rw_ios_per_sec": 0, 00:22:27.408 "rw_mbytes_per_sec": 0, 00:22:27.408 "r_mbytes_per_sec": 0, 00:22:27.408 "w_mbytes_per_sec": 0 00:22:27.408 }, 00:22:27.408 "claimed": true, 00:22:27.408 "claim_type": "read_many_write_one", 00:22:27.408 "zoned": false, 00:22:27.408 "supported_io_types": { 00:22:27.408 "read": true, 00:22:27.408 "write": true, 00:22:27.408 "unmap": true, 00:22:27.408 "flush": true, 00:22:27.408 "reset": true, 00:22:27.408 "nvme_admin": true, 00:22:27.408 "nvme_io": true, 00:22:27.408 "nvme_io_md": false, 00:22:27.408 "write_zeroes": true, 00:22:27.408 "zcopy": false, 00:22:27.408 "get_zone_info": false, 00:22:27.408 "zone_management": false, 00:22:27.408 "zone_append": false, 00:22:27.408 "compare": true, 00:22:27.408 "compare_and_write": false, 00:22:27.408 "abort": true, 00:22:27.408 "seek_hole": false, 00:22:27.408 "seek_data": false, 00:22:27.408 "copy": true, 00:22:27.408 "nvme_iov_md": false 00:22:27.408 }, 00:22:27.408 "driver_specific": { 00:22:27.408 "nvme": [ 00:22:27.408 { 00:22:27.408 "pci_address": "0000:00:11.0", 00:22:27.408 "trid": { 00:22:27.408 "trtype": "PCIe", 00:22:27.408 "traddr": "0000:00:11.0" 00:22:27.408 }, 00:22:27.408 "ctrlr_data": { 00:22:27.408 "cntlid": 0, 00:22:27.408 "vendor_id": "0x1b36", 00:22:27.408 "model_number": "QEMU NVMe Ctrl", 00:22:27.408 "serial_number": "12341", 00:22:27.408 "firmware_revision": "8.0.0", 00:22:27.408 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:27.408 "oacs": { 00:22:27.408 "security": 0, 00:22:27.408 "format": 1, 00:22:27.408 "firmware": 0, 00:22:27.408 "ns_manage": 1 00:22:27.408 }, 00:22:27.408 "multi_ctrlr": false, 00:22:27.408 "ana_reporting": false 00:22:27.408 }, 00:22:27.408 "vs": { 00:22:27.408 "nvme_version": "1.4" 00:22:27.408 }, 00:22:27.408 "ns_data": { 00:22:27.408 "id": 1, 00:22:27.408 "can_share": false 00:22:27.408 } 00:22:27.408 } 00:22:27.408 ], 00:22:27.408 "mp_policy": "active_passive" 00:22:27.408 } 00:22:27.408 } 00:22:27.408 ]' 00:22:27.408 04:13:14 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:27.667 04:13:14 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:27.667 04:13:14 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:27.667 04:13:14 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:27.667 04:13:14 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:27.667 04:13:14 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:22:27.667 04:13:14 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:22:27.667 04:13:14 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:27.667 04:13:14 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:22:27.667 04:13:14 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:27.667 04:13:14 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:27.667 04:13:15 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=db32acbf-b677-4b15-bfaa-0b5275d12596 00:22:27.667 04:13:15 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:22:27.667 04:13:15 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u db32acbf-b677-4b15-bfaa-0b5275d12596 00:22:27.924 04:13:15 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:22:28.182 04:13:15 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=3eb5675c-060f-4f35-a711-83f472a187a4 00:22:28.182 04:13:15 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3eb5675c-060f-4f35-a711-83f472a187a4 00:22:28.439 04:13:15 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=fc9bf637-d4ec-4b5d-afe1-ba22cff2eb8a 00:22:28.439 04:13:15 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:22:28.439 04:13:15 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fc9bf637-d4ec-4b5d-afe1-ba22cff2eb8a 00:22:28.439 04:13:15 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:22:28.439 04:13:15 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:28.439 04:13:15 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=fc9bf637-d4ec-4b5d-afe1-ba22cff2eb8a 00:22:28.439 04:13:15 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:22:28.439 04:13:15 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size fc9bf637-d4ec-4b5d-afe1-ba22cff2eb8a 00:22:28.439 04:13:15 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=fc9bf637-d4ec-4b5d-afe1-ba22cff2eb8a 00:22:28.439 04:13:15 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:28.439 04:13:15 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:28.439 04:13:15 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:28.439 04:13:15 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fc9bf637-d4ec-4b5d-afe1-ba22cff2eb8a 00:22:28.698 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:28.698 { 00:22:28.698 "name": "fc9bf637-d4ec-4b5d-afe1-ba22cff2eb8a", 00:22:28.698 "aliases": [ 00:22:28.698 "lvs/nvme0n1p0" 00:22:28.698 ], 00:22:28.698 "product_name": "Logical Volume", 00:22:28.698 "block_size": 4096, 00:22:28.698 "num_blocks": 26476544, 00:22:28.698 "uuid": "fc9bf637-d4ec-4b5d-afe1-ba22cff2eb8a", 00:22:28.698 "assigned_rate_limits": { 00:22:28.698 "rw_ios_per_sec": 0, 00:22:28.698 "rw_mbytes_per_sec": 0, 00:22:28.698 "r_mbytes_per_sec": 0, 00:22:28.698 "w_mbytes_per_sec": 0 00:22:28.698 }, 00:22:28.698 "claimed": false, 00:22:28.698 "zoned": false, 00:22:28.698 "supported_io_types": { 00:22:28.698 "read": true, 00:22:28.698 "write": true, 00:22:28.698 "unmap": true, 00:22:28.698 "flush": false, 00:22:28.698 "reset": true, 00:22:28.698 "nvme_admin": false, 00:22:28.698 "nvme_io": false, 00:22:28.698 "nvme_io_md": false, 00:22:28.698 "write_zeroes": true, 00:22:28.698 "zcopy": false, 00:22:28.698 "get_zone_info": false, 00:22:28.698 "zone_management": false, 00:22:28.698 "zone_append": false, 00:22:28.698 "compare": false, 00:22:28.698 "compare_and_write": false, 00:22:28.698 "abort": false, 00:22:28.698 "seek_hole": true, 00:22:28.698 "seek_data": true, 00:22:28.698 "copy": false, 00:22:28.698 "nvme_iov_md": false 00:22:28.698 }, 00:22:28.698 "driver_specific": { 00:22:28.698 "lvol": { 00:22:28.698 "lvol_store_uuid": "3eb5675c-060f-4f35-a711-83f472a187a4", 00:22:28.698 "base_bdev": "nvme0n1", 00:22:28.698 "thin_provision": true, 00:22:28.698 "num_allocated_clusters": 0, 00:22:28.698 "snapshot": false, 00:22:28.698 "clone": false, 00:22:28.698 "esnap_clone": false 00:22:28.698 } 00:22:28.698 } 00:22:28.698 } 00:22:28.698 ]' 00:22:28.698 04:13:16 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:28.698 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:28.698 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:28.698 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:28.698 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:28.698 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:28.698 04:13:16 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:22:28.698 04:13:16 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:22:28.698 04:13:16 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:28.957 04:13:16 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:28.957 04:13:16 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:28.957 04:13:16 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size fc9bf637-d4ec-4b5d-afe1-ba22cff2eb8a 00:22:28.957 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=fc9bf637-d4ec-4b5d-afe1-ba22cff2eb8a 00:22:28.957 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:28.957 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:28.957 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:28.957 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fc9bf637-d4ec-4b5d-afe1-ba22cff2eb8a 00:22:29.216 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:29.216 { 00:22:29.216 "name": "fc9bf637-d4ec-4b5d-afe1-ba22cff2eb8a", 00:22:29.216 "aliases": [ 00:22:29.216 "lvs/nvme0n1p0" 00:22:29.216 ], 00:22:29.216 "product_name": "Logical Volume", 00:22:29.216 "block_size": 4096, 00:22:29.216 "num_blocks": 26476544, 00:22:29.216 "uuid": "fc9bf637-d4ec-4b5d-afe1-ba22cff2eb8a", 00:22:29.216 "assigned_rate_limits": { 00:22:29.216 "rw_ios_per_sec": 0, 00:22:29.216 "rw_mbytes_per_sec": 0, 00:22:29.216 "r_mbytes_per_sec": 0, 00:22:29.216 "w_mbytes_per_sec": 0 00:22:29.216 }, 00:22:29.216 "claimed": false, 00:22:29.216 "zoned": false, 00:22:29.216 "supported_io_types": { 00:22:29.216 "read": true, 00:22:29.216 "write": true, 00:22:29.216 "unmap": true, 00:22:29.216 "flush": false, 00:22:29.216 "reset": true, 00:22:29.216 "nvme_admin": false, 00:22:29.216 "nvme_io": false, 00:22:29.216 "nvme_io_md": false, 00:22:29.217 "write_zeroes": true, 00:22:29.217 "zcopy": false, 00:22:29.217 "get_zone_info": false, 00:22:29.217 "zone_management": false, 00:22:29.217 "zone_append": false, 00:22:29.217 "compare": false, 00:22:29.217 "compare_and_write": false, 00:22:29.217 "abort": false, 00:22:29.217 "seek_hole": true, 00:22:29.217 "seek_data": true, 00:22:29.217 "copy": false, 00:22:29.217 "nvme_iov_md": false 00:22:29.217 }, 00:22:29.217 "driver_specific": { 00:22:29.217 "lvol": { 00:22:29.217 "lvol_store_uuid": "3eb5675c-060f-4f35-a711-83f472a187a4", 00:22:29.217 "base_bdev": "nvme0n1", 00:22:29.217 "thin_provision": true, 00:22:29.217 "num_allocated_clusters": 0, 00:22:29.217 "snapshot": false, 00:22:29.217 "clone": false, 00:22:29.217 "esnap_clone": false 00:22:29.217 } 00:22:29.217 } 00:22:29.217 } 00:22:29.217 ]' 00:22:29.217 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
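The jq probes above are get_bdev_size from autotest_common.sh pulling block_size and num_blocks out of the bdev_get_bdevs JSON and converting the product to MiB. Roughly, using the same rpc.py and jq invocations the log itself shows (the wrapper name is ours, not the SPDK helper verbatim):

get_bdev_size_mb() {    # illustrative wrapper around the traced steps
    local bdev=$1 info bs nb
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev")
    bs=$(jq '.[] .block_size' <<< "$info")     # bytes per block
    nb=$(jq '.[] .num_blocks' <<< "$info")     # block count
    echo $(( bs * nb / 1024 / 1024 ))          # size in MiB
}

For nvme0n1 that works out to 4096 B x 1310720 blocks = 5120 MiB, matching the bdev_size=5120 echoed in the trace; the fc9bf637... lvol resolves the same way to 26476544 blocks = 103424 MiB.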
00:22:29.217 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:29.217 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:29.217 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:29.217 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:29.217 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:29.217 04:13:16 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:22:29.217 04:13:16 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:29.475 04:13:16 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:22:29.475 04:13:16 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size fc9bf637-d4ec-4b5d-afe1-ba22cff2eb8a 00:22:29.475 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=fc9bf637-d4ec-4b5d-afe1-ba22cff2eb8a 00:22:29.475 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:29.475 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:29.475 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:29.475 04:13:16 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fc9bf637-d4ec-4b5d-afe1-ba22cff2eb8a 00:22:29.734 04:13:17 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:29.734 { 00:22:29.734 "name": "fc9bf637-d4ec-4b5d-afe1-ba22cff2eb8a", 00:22:29.734 "aliases": [ 00:22:29.734 "lvs/nvme0n1p0" 00:22:29.734 ], 00:22:29.734 "product_name": "Logical Volume", 00:22:29.734 "block_size": 4096, 00:22:29.734 "num_blocks": 26476544, 00:22:29.734 "uuid": "fc9bf637-d4ec-4b5d-afe1-ba22cff2eb8a", 00:22:29.734 "assigned_rate_limits": { 00:22:29.734 "rw_ios_per_sec": 0, 00:22:29.734 "rw_mbytes_per_sec": 0, 00:22:29.734 "r_mbytes_per_sec": 0, 00:22:29.734 "w_mbytes_per_sec": 0 00:22:29.734 }, 00:22:29.734 "claimed": false, 00:22:29.734 "zoned": false, 00:22:29.734 "supported_io_types": { 00:22:29.734 "read": true, 00:22:29.734 "write": true, 00:22:29.734 "unmap": true, 00:22:29.734 "flush": false, 00:22:29.734 "reset": true, 00:22:29.734 "nvme_admin": false, 00:22:29.734 "nvme_io": false, 00:22:29.734 "nvme_io_md": false, 00:22:29.734 "write_zeroes": true, 00:22:29.734 "zcopy": false, 00:22:29.734 "get_zone_info": false, 00:22:29.734 "zone_management": false, 00:22:29.734 "zone_append": false, 00:22:29.734 "compare": false, 00:22:29.734 "compare_and_write": false, 00:22:29.734 "abort": false, 00:22:29.734 "seek_hole": true, 00:22:29.734 "seek_data": true, 00:22:29.734 "copy": false, 00:22:29.734 "nvme_iov_md": false 00:22:29.734 }, 00:22:29.734 "driver_specific": { 00:22:29.734 "lvol": { 00:22:29.735 "lvol_store_uuid": "3eb5675c-060f-4f35-a711-83f472a187a4", 00:22:29.735 "base_bdev": "nvme0n1", 00:22:29.735 "thin_provision": true, 00:22:29.735 "num_allocated_clusters": 0, 00:22:29.735 "snapshot": false, 00:22:29.735 "clone": false, 00:22:29.735 "esnap_clone": false 00:22:29.735 } 00:22:29.735 } 00:22:29.735 } 00:22:29.735 ]' 00:22:29.735 04:13:17 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:29.735 04:13:17 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:29.735 04:13:17 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:29.735 04:13:17 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:22:29.735 04:13:17 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:29.735 04:13:17 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:29.735 04:13:17 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:22:29.735 04:13:17 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d fc9bf637-d4ec-4b5d-afe1-ba22cff2eb8a --l2p_dram_limit 10' 00:22:29.735 04:13:17 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:22:29.735 04:13:17 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:29.735 04:13:17 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:29.735 04:13:17 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:22:29.735 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:22:29.735 04:13:17 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fc9bf637-d4ec-4b5d-afe1-ba22cff2eb8a --l2p_dram_limit 10 -c nvc0n1p0 00:22:29.995 [2024-12-06 04:13:17.270308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.995 [2024-12-06 04:13:17.270345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:29.995 [2024-12-06 04:13:17.270359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:29.995 [2024-12-06 04:13:17.270366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.995 [2024-12-06 04:13:17.270412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.995 [2024-12-06 04:13:17.270419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:29.995 [2024-12-06 04:13:17.270427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:29.995 [2024-12-06 04:13:17.270433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.995 [2024-12-06 04:13:17.270458] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:29.995 [2024-12-06 04:13:17.271051] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:29.995 [2024-12-06 04:13:17.271139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.995 [2024-12-06 04:13:17.271147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:29.995 [2024-12-06 04:13:17.271156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:22:29.995 [2024-12-06 04:13:17.271161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.995 [2024-12-06 04:13:17.271216] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3668eabb-1b54-40a7-857e-301a1d6d2e94 00:22:29.995 [2024-12-06 04:13:17.272144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.995 [2024-12-06 04:13:17.272171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:29.995 [2024-12-06 04:13:17.272179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:29.995 [2024-12-06 04:13:17.272186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.995 [2024-12-06 04:13:17.276763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.995 [2024-12-06 
04:13:17.276789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:29.995 [2024-12-06 04:13:17.276796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.547 ms 00:22:29.995 [2024-12-06 04:13:17.276803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.995 [2024-12-06 04:13:17.276869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.995 [2024-12-06 04:13:17.276878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:29.995 [2024-12-06 04:13:17.276884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:29.995 [2024-12-06 04:13:17.276894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.995 [2024-12-06 04:13:17.276931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.995 [2024-12-06 04:13:17.276940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:29.995 [2024-12-06 04:13:17.276947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:29.995 [2024-12-06 04:13:17.276954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.995 [2024-12-06 04:13:17.276971] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:29.995 [2024-12-06 04:13:17.279795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.995 [2024-12-06 04:13:17.279817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:29.995 [2024-12-06 04:13:17.279826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.826 ms 00:22:29.995 [2024-12-06 04:13:17.279832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.995 [2024-12-06 04:13:17.279859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.995 [2024-12-06 04:13:17.279865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:29.995 [2024-12-06 04:13:17.279873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:29.995 [2024-12-06 04:13:17.279878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.995 [2024-12-06 04:13:17.279892] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:29.995 [2024-12-06 04:13:17.280001] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:29.995 [2024-12-06 04:13:17.280013] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:29.995 [2024-12-06 04:13:17.280021] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:29.995 [2024-12-06 04:13:17.280031] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:29.995 [2024-12-06 04:13:17.280037] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:29.995 [2024-12-06 04:13:17.280044] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:29.995 [2024-12-06 04:13:17.280050] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:29.995 [2024-12-06 04:13:17.280059] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:29.995 [2024-12-06 04:13:17.280064] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:29.995 [2024-12-06 04:13:17.280072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.995 [2024-12-06 04:13:17.280082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:29.995 [2024-12-06 04:13:17.280089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:22:29.995 [2024-12-06 04:13:17.280094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.995 [2024-12-06 04:13:17.280162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.995 [2024-12-06 04:13:17.280168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:29.995 [2024-12-06 04:13:17.280175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:29.995 [2024-12-06 04:13:17.280180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.995 [2024-12-06 04:13:17.280258] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:29.995 [2024-12-06 04:13:17.280265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:29.995 [2024-12-06 04:13:17.280272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:29.995 [2024-12-06 04:13:17.280278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.995 [2024-12-06 04:13:17.280285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:29.995 [2024-12-06 04:13:17.280290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:29.995 [2024-12-06 04:13:17.280296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:29.995 [2024-12-06 04:13:17.280301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:29.995 [2024-12-06 04:13:17.280309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:29.995 [2024-12-06 04:13:17.280314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:29.995 [2024-12-06 04:13:17.280320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:29.995 [2024-12-06 04:13:17.280326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:29.995 [2024-12-06 04:13:17.280332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:29.995 [2024-12-06 04:13:17.280337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:29.995 [2024-12-06 04:13:17.280344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:29.995 [2024-12-06 04:13:17.280349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.995 [2024-12-06 04:13:17.280356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:29.995 [2024-12-06 04:13:17.280362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:29.995 [2024-12-06 04:13:17.280369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.995 [2024-12-06 04:13:17.280374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:29.995 [2024-12-06 04:13:17.280380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:29.995 [2024-12-06 04:13:17.280385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.995 [2024-12-06 04:13:17.280391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:29.996 
[2024-12-06 04:13:17.280396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:29.996 [2024-12-06 04:13:17.280402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.996 [2024-12-06 04:13:17.280407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:29.996 [2024-12-06 04:13:17.280414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:29.996 [2024-12-06 04:13:17.280419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.996 [2024-12-06 04:13:17.280425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:29.996 [2024-12-06 04:13:17.280430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:29.996 [2024-12-06 04:13:17.280436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.996 [2024-12-06 04:13:17.280441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:29.996 [2024-12-06 04:13:17.280448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:29.996 [2024-12-06 04:13:17.280453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:29.996 [2024-12-06 04:13:17.280460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:29.996 [2024-12-06 04:13:17.280465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:29.996 [2024-12-06 04:13:17.280470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:29.996 [2024-12-06 04:13:17.280475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:29.996 [2024-12-06 04:13:17.280481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:29.996 [2024-12-06 04:13:17.280486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.996 [2024-12-06 04:13:17.280492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:29.996 [2024-12-06 04:13:17.280497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:29.996 [2024-12-06 04:13:17.280503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.996 [2024-12-06 04:13:17.280508] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:29.996 [2024-12-06 04:13:17.280515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:29.996 [2024-12-06 04:13:17.280521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:29.996 [2024-12-06 04:13:17.280527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.996 [2024-12-06 04:13:17.280534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:29.996 [2024-12-06 04:13:17.280542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:29.996 [2024-12-06 04:13:17.280547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:29.996 [2024-12-06 04:13:17.280554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:29.996 [2024-12-06 04:13:17.280559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:29.996 [2024-12-06 04:13:17.280565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:29.996 [2024-12-06 04:13:17.280571] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:29.996 [2024-12-06 
04:13:17.280581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:29.996 [2024-12-06 04:13:17.280587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:29.996 [2024-12-06 04:13:17.280594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:29.996 [2024-12-06 04:13:17.280599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:29.996 [2024-12-06 04:13:17.280606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:29.996 [2024-12-06 04:13:17.280612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:29.996 [2024-12-06 04:13:17.280619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:29.996 [2024-12-06 04:13:17.280624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:29.996 [2024-12-06 04:13:17.280631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:29.996 [2024-12-06 04:13:17.280636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:29.996 [2024-12-06 04:13:17.280644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:29.996 [2024-12-06 04:13:17.280649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:29.996 [2024-12-06 04:13:17.280656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:29.996 [2024-12-06 04:13:17.280661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:29.996 [2024-12-06 04:13:17.280668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:29.996 [2024-12-06 04:13:17.280673] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:29.996 [2024-12-06 04:13:17.280681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:29.996 [2024-12-06 04:13:17.280687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:29.996 [2024-12-06 04:13:17.280693] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:29.996 [2024-12-06 04:13:17.280699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:29.996 [2024-12-06 04:13:17.280706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:29.996 [2024-12-06 04:13:17.280712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.996 [2024-12-06 04:13:17.280735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:29.996 [2024-12-06 04:13:17.280742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:22:29.996 [2024-12-06 04:13:17.280749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.996 [2024-12-06 04:13:17.280786] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:29.996 [2024-12-06 04:13:17.280796] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:32.032 [2024-12-06 04:13:19.516192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.032 [2024-12-06 04:13:19.516253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:32.032 [2024-12-06 04:13:19.516268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2235.397 ms 00:22:32.032 [2024-12-06 04:13:19.516279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.032 [2024-12-06 04:13:19.541211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.032 [2024-12-06 04:13:19.541258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:32.032 [2024-12-06 04:13:19.541271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.731 ms 00:22:32.032 [2024-12-06 04:13:19.541280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.032 [2024-12-06 04:13:19.541398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.032 [2024-12-06 04:13:19.541412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:32.032 [2024-12-06 04:13:19.541420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:32.032 [2024-12-06 04:13:19.541433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.292 [2024-12-06 04:13:19.571586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.292 [2024-12-06 04:13:19.571622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:32.292 [2024-12-06 04:13:19.571633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.118 ms 00:22:32.292 [2024-12-06 04:13:19.571642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.292 [2024-12-06 04:13:19.571667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.292 [2024-12-06 04:13:19.571680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:32.292 [2024-12-06 04:13:19.571689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:32.292 [2024-12-06 04:13:19.571703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.292 [2024-12-06 04:13:19.572059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.292 [2024-12-06 04:13:19.572083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:32.292 [2024-12-06 04:13:19.572092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:22:32.292 [2024-12-06 04:13:19.572101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.292 
[2024-12-06 04:13:19.572203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.292 [2024-12-06 04:13:19.572213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:32.292 [2024-12-06 04:13:19.572223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:22:32.292 [2024-12-06 04:13:19.572234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.292 [2024-12-06 04:13:19.585941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.292 [2024-12-06 04:13:19.585977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:32.292 [2024-12-06 04:13:19.585987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.691 ms 00:22:32.292 [2024-12-06 04:13:19.585997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.292 [2024-12-06 04:13:19.610137] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:32.292 [2024-12-06 04:13:19.612783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.292 [2024-12-06 04:13:19.612924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:32.292 [2024-12-06 04:13:19.612945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.718 ms 00:22:32.292 [2024-12-06 04:13:19.612954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.292 [2024-12-06 04:13:19.673777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.292 [2024-12-06 04:13:19.673824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:32.292 [2024-12-06 04:13:19.673840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.785 ms 00:22:32.292 [2024-12-06 04:13:19.673848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.292 [2024-12-06 04:13:19.674022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.292 [2024-12-06 04:13:19.674035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:32.292 [2024-12-06 04:13:19.674047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:22:32.292 [2024-12-06 04:13:19.674054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.292 [2024-12-06 04:13:19.696844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.292 [2024-12-06 04:13:19.696879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:32.292 [2024-12-06 04:13:19.696892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.744 ms 00:22:32.292 [2024-12-06 04:13:19.696899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.292 [2024-12-06 04:13:19.719292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.292 [2024-12-06 04:13:19.719322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:32.292 [2024-12-06 04:13:19.719335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.353 ms 00:22:32.292 [2024-12-06 04:13:19.719342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.292 [2024-12-06 04:13:19.719920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.292 [2024-12-06 04:13:19.719976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:32.292 
[2024-12-06 04:13:19.719990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:22:32.292 [2024-12-06 04:13:19.720001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.292 [2024-12-06 04:13:19.785664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.292 [2024-12-06 04:13:19.785709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:32.292 [2024-12-06 04:13:19.785738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.615 ms 00:22:32.292 [2024-12-06 04:13:19.785746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.292 [2024-12-06 04:13:19.809603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.292 [2024-12-06 04:13:19.809637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:32.292 [2024-12-06 04:13:19.809651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.789 ms 00:22:32.292 [2024-12-06 04:13:19.809658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.551 [2024-12-06 04:13:19.832130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.551 [2024-12-06 04:13:19.832161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:32.551 [2024-12-06 04:13:19.832173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.434 ms 00:22:32.551 [2024-12-06 04:13:19.832180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.551 [2024-12-06 04:13:19.855092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.551 [2024-12-06 04:13:19.855227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:32.551 [2024-12-06 04:13:19.855247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.875 ms 00:22:32.551 [2024-12-06 04:13:19.855254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.551 [2024-12-06 04:13:19.855289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.551 [2024-12-06 04:13:19.855298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:32.551 [2024-12-06 04:13:19.855310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:32.551 [2024-12-06 04:13:19.855318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.551 [2024-12-06 04:13:19.855390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.551 [2024-12-06 04:13:19.855401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:32.551 [2024-12-06 04:13:19.855411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:32.551 [2024-12-06 04:13:19.855418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.551 [2024-12-06 04:13:19.856306] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2585.601 ms, result 0 00:22:32.551 { 00:22:32.551 "name": "ftl0", 00:22:32.551 "uuid": "3668eabb-1b54-40a7-857e-301a1d6d2e94" 00:22:32.551 } 00:22:32.551 04:13:19 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:22:32.551 04:13:19 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:32.551 04:13:20 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:22:32.551 04:13:20 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:32.811 [2024-12-06 04:13:20.183841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.811 [2024-12-06 04:13:20.183892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:32.811 [2024-12-06 04:13:20.183905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:32.811 [2024-12-06 04:13:20.183915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.811 [2024-12-06 04:13:20.183938] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:32.811 [2024-12-06 04:13:20.186582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.811 [2024-12-06 04:13:20.186735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:32.811 [2024-12-06 04:13:20.186756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.625 ms 00:22:32.811 [2024-12-06 04:13:20.186764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.811 [2024-12-06 04:13:20.187036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.811 [2024-12-06 04:13:20.187048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:32.811 [2024-12-06 04:13:20.187059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:22:32.811 [2024-12-06 04:13:20.187066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.811 [2024-12-06 04:13:20.190288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.811 [2024-12-06 04:13:20.190378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:32.811 [2024-12-06 04:13:20.190393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.205 ms 00:22:32.811 [2024-12-06 04:13:20.190401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.811 [2024-12-06 04:13:20.196623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.811 [2024-12-06 04:13:20.196651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:32.812 [2024-12-06 04:13:20.196665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.202 ms 00:22:32.812 [2024-12-06 04:13:20.196673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.812 [2024-12-06 04:13:20.219549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.812 [2024-12-06 04:13:20.219580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:32.812 [2024-12-06 04:13:20.219594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.805 ms 00:22:32.812 [2024-12-06 04:13:20.219601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.812 [2024-12-06 04:13:20.234028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.812 [2024-12-06 04:13:20.234062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:32.812 [2024-12-06 04:13:20.234076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.389 ms 00:22:32.812 [2024-12-06 04:13:20.234083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.812 [2024-12-06 04:13:20.234229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.812 [2024-12-06 04:13:20.234240] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:32.812 [2024-12-06 04:13:20.234250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:22:32.812 [2024-12-06 04:13:20.234258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.812 [2024-12-06 04:13:20.256805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.812 [2024-12-06 04:13:20.256836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:32.812 [2024-12-06 04:13:20.256848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.526 ms 00:22:32.812 [2024-12-06 04:13:20.256856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.812 [2024-12-06 04:13:20.279126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.812 [2024-12-06 04:13:20.279154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:32.812 [2024-12-06 04:13:20.279165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.232 ms 00:22:32.812 [2024-12-06 04:13:20.279173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.812 [2024-12-06 04:13:20.302050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.812 [2024-12-06 04:13:20.302167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:32.812 [2024-12-06 04:13:20.302185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.839 ms 00:22:32.812 [2024-12-06 04:13:20.302193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.812 [2024-12-06 04:13:20.323954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.812 [2024-12-06 04:13:20.323982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:32.812 [2024-12-06 04:13:20.323994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.693 ms 00:22:32.812 [2024-12-06 04:13:20.324002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.812 [2024-12-06 04:13:20.324036] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:32.812 [2024-12-06 04:13:20.324049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324131] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 
[2024-12-06 04:13:20.324336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:32.812 [2024-12-06 04:13:20.324402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:22:32.813 [2024-12-06 04:13:20.324546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.324993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:32.813 [2024-12-06 04:13:20.325008] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:32.813 [2024-12-06 04:13:20.325017] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3668eabb-1b54-40a7-857e-301a1d6d2e94 00:22:32.813 [2024-12-06 04:13:20.325025] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:32.813 [2024-12-06 04:13:20.325035] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:32.813 [2024-12-06 04:13:20.325044] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:32.813 [2024-12-06 04:13:20.325053] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:32.813 [2024-12-06 04:13:20.325060] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:32.813 [2024-12-06 04:13:20.325068] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:32.813 [2024-12-06 04:13:20.325075] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:32.813 [2024-12-06 04:13:20.325083] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:32.813 [2024-12-06 04:13:20.325089] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:22:32.813 [2024-12-06 04:13:20.325097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.813 [2024-12-06 04:13:20.325104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:32.813 [2024-12-06 04:13:20.325113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:22:32.813 [2024-12-06 04:13:20.325122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.073 [2024-12-06 04:13:20.337502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.073 [2024-12-06 04:13:20.337615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:33.073 [2024-12-06 04:13:20.337633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.338 ms 00:22:33.073 [2024-12-06 04:13:20.337640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.073 [2024-12-06 04:13:20.337997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.073 [2024-12-06 04:13:20.338006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:33.073 [2024-12-06 04:13:20.338019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:22:33.073 [2024-12-06 04:13:20.338026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.073 [2024-12-06 04:13:20.379606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.073 [2024-12-06 04:13:20.379639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:33.073 [2024-12-06 04:13:20.379651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.073 [2024-12-06 04:13:20.379659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.073 [2024-12-06 04:13:20.379728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.073 [2024-12-06 04:13:20.379737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:33.073 [2024-12-06 04:13:20.379749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.073 [2024-12-06 04:13:20.379756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.073 [2024-12-06 04:13:20.379821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.073 [2024-12-06 04:13:20.379831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:33.073 [2024-12-06 04:13:20.379840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.073 [2024-12-06 04:13:20.379847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.073 [2024-12-06 04:13:20.379868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.073 [2024-12-06 04:13:20.379875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:33.073 [2024-12-06 04:13:20.379883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.073 [2024-12-06 04:13:20.379892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.073 [2024-12-06 04:13:20.457156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.073 [2024-12-06 04:13:20.457194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:33.073 [2024-12-06 04:13:20.457206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:22:33.073 [2024-12-06 04:13:20.457214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.073 [2024-12-06 04:13:20.520402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.073 [2024-12-06 04:13:20.520547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:33.073 [2024-12-06 04:13:20.520566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.073 [2024-12-06 04:13:20.520576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.073 [2024-12-06 04:13:20.520654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.073 [2024-12-06 04:13:20.520664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:33.073 [2024-12-06 04:13:20.520674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.073 [2024-12-06 04:13:20.520681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.073 [2024-12-06 04:13:20.520752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.073 [2024-12-06 04:13:20.520762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:33.073 [2024-12-06 04:13:20.520772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.073 [2024-12-06 04:13:20.520779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.073 [2024-12-06 04:13:20.520872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.073 [2024-12-06 04:13:20.520881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:33.073 [2024-12-06 04:13:20.520890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.073 [2024-12-06 04:13:20.520898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.073 [2024-12-06 04:13:20.520934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.073 [2024-12-06 04:13:20.520942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:33.073 [2024-12-06 04:13:20.520952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.073 [2024-12-06 04:13:20.520959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.074 [2024-12-06 04:13:20.520996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.074 [2024-12-06 04:13:20.521004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:33.074 [2024-12-06 04:13:20.521013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.074 [2024-12-06 04:13:20.521021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.074 [2024-12-06 04:13:20.521063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.074 [2024-12-06 04:13:20.521072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:33.074 [2024-12-06 04:13:20.521082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.074 [2024-12-06 04:13:20.521089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.074 [2024-12-06 04:13:20.521211] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 337.342 ms, result 0 00:22:33.074 true 00:22:33.074 04:13:20 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77247 
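Each FTL management step in the traces above is logged by mngt/ftl_mngt.c as a fixed quadruple: an Action/Rollback marker (line 427), "name:" (428), "duration:" (430) and "status:" (431). A minimal sketch for ranking where the 'FTL startup' / 'FTL shutdown' time goes, assuming only that record format (ftl_steps.py is a hypothetical helper, not an SPDK tool):

#!/usr/bin/env python3
# ftl_steps.py -- hypothetical log helper; assumes only the trace_step
# quadruple format visible in this log, not any SPDK interface.
import re
import sys
from collections import defaultdict

# Split (possibly re-wrapped) log text back into individual notices by
# their absolute timestamps, e.g. "[2024-12-06 04:13:19.572203]".
REC = re.compile(r"\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+\]")
NAME = re.compile(r"\[FTL\]\[\w+\] name: (.+?)(?= \d{2}:\d{2}:\d{2}\.\d+|$)")
DUR = re.compile(r"\[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def steps(text):
    # The 428 "name" notice always precedes the matching 430 "duration".
    name = None
    for rec in REC.split(text):
        m = NAME.search(rec)
        if m:
            name = m.group(1).strip()
            continue
        m = DUR.search(rec)
        if m and name is not None:
            yield name, float(m.group(1))
            name = None

totals = defaultdict(float)
for step, ms in steps(sys.stdin.read()):
    totals[step] += ms
for step, ms in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{ms:10.3f} ms  {step}")

Run as "python3 ftl_steps.py < build.log"; on the startup trace above it would put "Wipe P2L region" (65.615 ms) and "Clear L2P" (60.785 ms) near the top of the 2585.601 ms total.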
00:22:33.074 04:13:20 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77247 ']' 00:22:33.074 04:13:20 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77247 00:22:33.074 04:13:20 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:22:33.074 04:13:20 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:33.074 04:13:20 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77247 00:22:33.074 04:13:20 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:33.074 04:13:20 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:33.074 04:13:20 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77247' 00:22:33.074 killing process with pid 77247 00:22:33.074 04:13:20 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77247 00:22:33.074 04:13:20 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77247 00:22:39.630 04:13:26 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:22:43.807 262144+0 records in 00:22:43.807 262144+0 records out 00:22:43.807 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.9408 s, 272 MB/s 00:22:43.807 04:13:30 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:44.754 04:13:32 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:44.754 [2024-12-06 04:13:32.261854] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:22:44.754 [2024-12-06 04:13:32.261942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77451 ] 00:22:45.016 [2024-12-06 04:13:32.416993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.016 [2024-12-06 04:13:32.509367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.277 [2024-12-06 04:13:32.763691] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:45.277 [2024-12-06 04:13:32.763764] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:45.537 [2024-12-06 04:13:32.916616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.537 [2024-12-06 04:13:32.916663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:45.537 [2024-12-06 04:13:32.916675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:45.537 [2024-12-06 04:13:32.916683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.537 [2024-12-06 04:13:32.916741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.537 [2024-12-06 04:13:32.916754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:45.537 [2024-12-06 04:13:32.916762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:45.537 [2024-12-06 04:13:32.916769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.537 [2024-12-06 04:13:32.916785] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:22:45.537 [2024-12-06 04:13:32.917489] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:45.537 [2024-12-06 04:13:32.917509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.537 [2024-12-06 04:13:32.917516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:45.537 [2024-12-06 04:13:32.917525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:22:45.537 [2024-12-06 04:13:32.917532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.537 [2024-12-06 04:13:32.918553] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:45.537 [2024-12-06 04:13:32.930670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.537 [2024-12-06 04:13:32.930702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:45.537 [2024-12-06 04:13:32.930713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.118 ms 00:22:45.537 [2024-12-06 04:13:32.930733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.537 [2024-12-06 04:13:32.930786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.537 [2024-12-06 04:13:32.930795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:45.537 [2024-12-06 04:13:32.930803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:45.537 [2024-12-06 04:13:32.930810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.537 [2024-12-06 04:13:32.935411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.537 [2024-12-06 04:13:32.935441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:45.537 [2024-12-06 04:13:32.935450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.544 ms 00:22:45.537 [2024-12-06 04:13:32.935462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.537 [2024-12-06 04:13:32.935526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.537 [2024-12-06 04:13:32.935535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:45.537 [2024-12-06 04:13:32.935542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:22:45.537 [2024-12-06 04:13:32.935549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.537 [2024-12-06 04:13:32.935588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.537 [2024-12-06 04:13:32.935598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:45.537 [2024-12-06 04:13:32.935606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:45.537 [2024-12-06 04:13:32.935612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.537 [2024-12-06 04:13:32.935634] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:45.537 [2024-12-06 04:13:32.938947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.537 [2024-12-06 04:13:32.938973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:45.537 [2024-12-06 04:13:32.938984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.317 ms 00:22:45.537 [2024-12-06 04:13:32.938991] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.537 [2024-12-06 04:13:32.939019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.537 [2024-12-06 04:13:32.939027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:45.537 [2024-12-06 04:13:32.939035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:45.537 [2024-12-06 04:13:32.939042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.537 [2024-12-06 04:13:32.939059] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:45.537 [2024-12-06 04:13:32.939077] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:45.537 [2024-12-06 04:13:32.939109] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:45.538 [2024-12-06 04:13:32.939126] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:45.538 [2024-12-06 04:13:32.939226] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:45.538 [2024-12-06 04:13:32.939236] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:45.538 [2024-12-06 04:13:32.939246] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:45.538 [2024-12-06 04:13:32.939255] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:45.538 [2024-12-06 04:13:32.939264] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:45.538 [2024-12-06 04:13:32.939271] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:45.538 [2024-12-06 04:13:32.939278] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:45.538 [2024-12-06 04:13:32.939288] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:45.538 [2024-12-06 04:13:32.939295] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:45.538 [2024-12-06 04:13:32.939302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.538 [2024-12-06 04:13:32.939309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:45.538 [2024-12-06 04:13:32.939317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:22:45.538 [2024-12-06 04:13:32.939323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.538 [2024-12-06 04:13:32.939405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.538 [2024-12-06 04:13:32.939412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:45.538 [2024-12-06 04:13:32.939420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:45.538 [2024-12-06 04:13:32.939427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.538 [2024-12-06 04:13:32.939528] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:45.538 [2024-12-06 04:13:32.939537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:45.538 [2024-12-06 04:13:32.939546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:22:45.538 [2024-12-06 04:13:32.939554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.538 [2024-12-06 04:13:32.939561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:45.538 [2024-12-06 04:13:32.939568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:45.538 [2024-12-06 04:13:32.939575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:45.538 [2024-12-06 04:13:32.939583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:45.538 [2024-12-06 04:13:32.939589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:45.538 [2024-12-06 04:13:32.939595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:45.538 [2024-12-06 04:13:32.939602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:45.538 [2024-12-06 04:13:32.939608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:45.538 [2024-12-06 04:13:32.939615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:45.538 [2024-12-06 04:13:32.939627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:45.538 [2024-12-06 04:13:32.939633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:45.538 [2024-12-06 04:13:32.939639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.538 [2024-12-06 04:13:32.939647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:45.538 [2024-12-06 04:13:32.939654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:45.538 [2024-12-06 04:13:32.939660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.538 [2024-12-06 04:13:32.939667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:45.538 [2024-12-06 04:13:32.939673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:45.538 [2024-12-06 04:13:32.939679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:45.538 [2024-12-06 04:13:32.939686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:45.538 [2024-12-06 04:13:32.939692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:45.538 [2024-12-06 04:13:32.939698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:45.538 [2024-12-06 04:13:32.939705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:45.538 [2024-12-06 04:13:32.939711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:45.538 [2024-12-06 04:13:32.939737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:45.538 [2024-12-06 04:13:32.939744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:45.538 [2024-12-06 04:13:32.939750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:45.538 [2024-12-06 04:13:32.939756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:45.538 [2024-12-06 04:13:32.939763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:45.538 [2024-12-06 04:13:32.939770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:45.538 [2024-12-06 04:13:32.939776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:45.538 [2024-12-06 04:13:32.939782] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:22:45.538 [2024-12-06 04:13:32.939788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:45.538 [2024-12-06 04:13:32.939795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:45.538 [2024-12-06 04:13:32.939801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:45.538 [2024-12-06 04:13:32.939808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:45.538 [2024-12-06 04:13:32.939814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.538 [2024-12-06 04:13:32.939821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:45.538 [2024-12-06 04:13:32.939827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:45.538 [2024-12-06 04:13:32.939835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.538 [2024-12-06 04:13:32.939841] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:45.538 [2024-12-06 04:13:32.939849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:45.538 [2024-12-06 04:13:32.939855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:45.538 [2024-12-06 04:13:32.939862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.538 [2024-12-06 04:13:32.939869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:45.538 [2024-12-06 04:13:32.939877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:45.538 [2024-12-06 04:13:32.939884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:45.538 [2024-12-06 04:13:32.939891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:45.538 [2024-12-06 04:13:32.939897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:45.538 [2024-12-06 04:13:32.939903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:45.538 [2024-12-06 04:13:32.939911] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:45.538 [2024-12-06 04:13:32.939920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:45.538 [2024-12-06 04:13:32.939931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:45.538 [2024-12-06 04:13:32.939938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:45.538 [2024-12-06 04:13:32.939945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:45.538 [2024-12-06 04:13:32.939952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:45.538 [2024-12-06 04:13:32.939959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:45.538 [2024-12-06 04:13:32.939966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:45.538 [2024-12-06 04:13:32.939972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:45.538 [2024-12-06 04:13:32.939979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:45.538 [2024-12-06 04:13:32.939985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:45.538 [2024-12-06 04:13:32.939993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:45.538 [2024-12-06 04:13:32.940000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:45.538 [2024-12-06 04:13:32.940006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:45.538 [2024-12-06 04:13:32.940013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:45.538 [2024-12-06 04:13:32.940020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:45.538 [2024-12-06 04:13:32.940027] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:45.538 [2024-12-06 04:13:32.940035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:45.538 [2024-12-06 04:13:32.940042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:45.538 [2024-12-06 04:13:32.940049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:45.538 [2024-12-06 04:13:32.940056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:45.538 [2024-12-06 04:13:32.940063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:45.538 [2024-12-06 04:13:32.940071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.538 [2024-12-06 04:13:32.940078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:45.538 [2024-12-06 04:13:32.940085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 00:22:45.538 [2024-12-06 04:13:32.940092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.538 [2024-12-06 04:13:32.965226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.539 [2024-12-06 04:13:32.965259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:45.539 [2024-12-06 04:13:32.965269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.081 ms 00:22:45.539 [2024-12-06 04:13:32.965279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.539 [2024-12-06 04:13:32.965359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.539 [2024-12-06 04:13:32.965367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:45.539 [2024-12-06 04:13:32.965375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.062 ms 00:22:45.539 [2024-12-06 04:13:32.965381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.539 [2024-12-06 04:13:33.013352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.539 [2024-12-06 04:13:33.013493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:45.539 [2024-12-06 04:13:33.013511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.925 ms 00:22:45.539 [2024-12-06 04:13:33.013520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.539 [2024-12-06 04:13:33.013556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.539 [2024-12-06 04:13:33.013567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:45.539 [2024-12-06 04:13:33.013579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:45.539 [2024-12-06 04:13:33.013586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.539 [2024-12-06 04:13:33.013953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.539 [2024-12-06 04:13:33.013970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:45.539 [2024-12-06 04:13:33.013979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:22:45.539 [2024-12-06 04:13:33.013987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.539 [2024-12-06 04:13:33.014108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.539 [2024-12-06 04:13:33.014117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:45.539 [2024-12-06 04:13:33.014127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:22:45.539 [2024-12-06 04:13:33.014134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.539 [2024-12-06 04:13:33.026920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.539 [2024-12-06 04:13:33.026952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:45.539 [2024-12-06 04:13:33.026962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.766 ms 00:22:45.539 [2024-12-06 04:13:33.026969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.539 [2024-12-06 04:13:33.039333] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:45.539 [2024-12-06 04:13:33.039367] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:45.539 [2024-12-06 04:13:33.039378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.539 [2024-12-06 04:13:33.039387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:45.539 [2024-12-06 04:13:33.039395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.322 ms 00:22:45.539 [2024-12-06 04:13:33.039402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.798 [2024-12-06 04:13:33.063424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.798 [2024-12-06 04:13:33.063556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:45.798 [2024-12-06 04:13:33.063572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.986 ms 00:22:45.798 [2024-12-06 04:13:33.063579] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.798 [2024-12-06 04:13:33.074724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.798 [2024-12-06 04:13:33.074754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:45.798 [2024-12-06 04:13:33.074764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.110 ms 00:22:45.798 [2024-12-06 04:13:33.074771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.798 [2024-12-06 04:13:33.085826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.798 [2024-12-06 04:13:33.085945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:45.798 [2024-12-06 04:13:33.085961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.025 ms 00:22:45.798 [2024-12-06 04:13:33.085967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.798 [2024-12-06 04:13:33.086558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.798 [2024-12-06 04:13:33.086578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:45.798 [2024-12-06 04:13:33.086587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:22:45.798 [2024-12-06 04:13:33.086596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.798 [2024-12-06 04:13:33.141045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.798 [2024-12-06 04:13:33.141205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:45.798 [2024-12-06 04:13:33.141221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.434 ms 00:22:45.798 [2024-12-06 04:13:33.141234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.798 [2024-12-06 04:13:33.151270] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:45.798 [2024-12-06 04:13:33.153284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.798 [2024-12-06 04:13:33.153312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:45.798 [2024-12-06 04:13:33.153323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.019 ms 00:22:45.798 [2024-12-06 04:13:33.153331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.798 [2024-12-06 04:13:33.153408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.798 [2024-12-06 04:13:33.153419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:45.798 [2024-12-06 04:13:33.153429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:45.798 [2024-12-06 04:13:33.153438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.798 [2024-12-06 04:13:33.153501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.798 [2024-12-06 04:13:33.153511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:45.798 [2024-12-06 04:13:33.153520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:45.798 [2024-12-06 04:13:33.153529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.798 [2024-12-06 04:13:33.153548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.798 [2024-12-06 04:13:33.153557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:22:45.798 [2024-12-06 04:13:33.153565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:45.798 [2024-12-06 04:13:33.153573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.798 [2024-12-06 04:13:33.153602] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:45.798 [2024-12-06 04:13:33.153614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.798 [2024-12-06 04:13:33.153621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:45.798 [2024-12-06 04:13:33.153628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:45.798 [2024-12-06 04:13:33.153635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.798 [2024-12-06 04:13:33.176625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.798 [2024-12-06 04:13:33.176756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:45.798 [2024-12-06 04:13:33.176772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.973 ms 00:22:45.798 [2024-12-06 04:13:33.176784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.798 [2024-12-06 04:13:33.176846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.798 [2024-12-06 04:13:33.176855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:45.798 [2024-12-06 04:13:33.176863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:45.798 [2024-12-06 04:13:33.176870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.798 [2024-12-06 04:13:33.177704] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 260.692 ms, result 0 00:22:46.731  [2024-12-06T04:13:35.193Z] Copying: 45/1024 [MB] (45 MBps) [2024-12-06T04:13:36.564Z] Copying: 94/1024 [MB] (48 MBps) [2024-12-06T04:13:37.498Z] Copying: 139/1024 [MB] (45 MBps) [2024-12-06T04:13:38.433Z] Copying: 188/1024 [MB] (48 MBps) [2024-12-06T04:13:39.366Z] Copying: 233/1024 [MB] (45 MBps) [2024-12-06T04:13:40.299Z] Copying: 279/1024 [MB] (45 MBps) [2024-12-06T04:13:41.231Z] Copying: 324/1024 [MB] (45 MBps) [2024-12-06T04:13:42.603Z] Copying: 371/1024 [MB] (46 MBps) [2024-12-06T04:13:43.536Z] Copying: 418/1024 [MB] (46 MBps) [2024-12-06T04:13:44.470Z] Copying: 466/1024 [MB] (48 MBps) [2024-12-06T04:13:45.404Z] Copying: 511/1024 [MB] (45 MBps) [2024-12-06T04:13:46.335Z] Copying: 557/1024 [MB] (45 MBps) [2024-12-06T04:13:47.267Z] Copying: 602/1024 [MB] (45 MBps) [2024-12-06T04:13:48.200Z] Copying: 647/1024 [MB] (44 MBps) [2024-12-06T04:13:49.572Z] Copying: 692/1024 [MB] (45 MBps) [2024-12-06T04:13:50.504Z] Copying: 738/1024 [MB] (45 MBps) [2024-12-06T04:13:51.436Z] Copying: 789/1024 [MB] (50 MBps) [2024-12-06T04:13:52.369Z] Copying: 835/1024 [MB] (46 MBps) [2024-12-06T04:13:53.304Z] Copying: 880/1024 [MB] (44 MBps) [2024-12-06T04:13:54.287Z] Copying: 925/1024 [MB] (45 MBps) [2024-12-06T04:13:55.241Z] Copying: 971/1024 [MB] (45 MBps) [2024-12-06T04:13:55.500Z] Copying: 1019/1024 [MB] (48 MBps) [2024-12-06T04:13:55.500Z] Copying: 1024/1024 [MB] (average 46 MBps)[2024-12-06 04:13:55.283867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.973 [2024-12-06 04:13:55.283908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 
00:23:07.973 [2024-12-06 04:13:55.283921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:07.973 [2024-12-06 04:13:55.283929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.973 [2024-12-06 04:13:55.283949] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:07.973 [2024-12-06 04:13:55.286516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.973 [2024-12-06 04:13:55.286540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:07.973 [2024-12-06 04:13:55.286554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.554 ms 00:23:07.973 [2024-12-06 04:13:55.286562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.973 [2024-12-06 04:13:55.287966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.973 [2024-12-06 04:13:55.287994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:07.973 [2024-12-06 04:13:55.288003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.385 ms 00:23:07.974 [2024-12-06 04:13:55.288011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.974 [2024-12-06 04:13:55.300416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.974 [2024-12-06 04:13:55.300445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:07.974 [2024-12-06 04:13:55.300455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.390 ms 00:23:07.974 [2024-12-06 04:13:55.300462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.974 [2024-12-06 04:13:55.306620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.974 [2024-12-06 04:13:55.306644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:07.974 [2024-12-06 04:13:55.306652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.125 ms 00:23:07.974 [2024-12-06 04:13:55.306659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.974 [2024-12-06 04:13:55.329863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.974 [2024-12-06 04:13:55.329891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:07.974 [2024-12-06 04:13:55.329901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.161 ms 00:23:07.974 [2024-12-06 04:13:55.329909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.974 [2024-12-06 04:13:55.344124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.974 [2024-12-06 04:13:55.344150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:07.974 [2024-12-06 04:13:55.344161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.185 ms 00:23:07.974 [2024-12-06 04:13:55.344169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.974 [2024-12-06 04:13:55.344307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.974 [2024-12-06 04:13:55.344324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:07.974 [2024-12-06 04:13:55.344332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:23:07.974 [2024-12-06 04:13:55.344339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.974 [2024-12-06 
04:13:55.366862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.974 [2024-12-06 04:13:55.366889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:07.974 [2024-12-06 04:13:55.366899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.510 ms 00:23:07.974 [2024-12-06 04:13:55.366907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.974 [2024-12-06 04:13:55.389090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.974 [2024-12-06 04:13:55.389116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:07.974 [2024-12-06 04:13:55.389125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.153 ms 00:23:07.974 [2024-12-06 04:13:55.389132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.974 [2024-12-06 04:13:55.411011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.974 [2024-12-06 04:13:55.411036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:07.974 [2024-12-06 04:13:55.411045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.848 ms 00:23:07.974 [2024-12-06 04:13:55.411052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.974 [2024-12-06 04:13:55.433034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.974 [2024-12-06 04:13:55.433060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:07.974 [2024-12-06 04:13:55.433069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.932 ms 00:23:07.974 [2024-12-06 04:13:55.433076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.974 [2024-12-06 04:13:55.433106] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:07.974 [2024-12-06 04:13:55.433120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433211] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433397] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:07.974 [2024-12-06 04:13:55.433554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 
04:13:55.433584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:23:07.975 [2024-12-06 04:13:55.433783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:07.975 [2024-12-06 04:13:55.433894] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:07.975 [2024-12-06 04:13:55.433905] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3668eabb-1b54-40a7-857e-301a1d6d2e94 00:23:07.975 [2024-12-06 04:13:55.433912] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:07.975 [2024-12-06 04:13:55.433919] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:07.975 [2024-12-06 04:13:55.433925] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:07.975 [2024-12-06 04:13:55.433933] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:07.975 [2024-12-06 04:13:55.433939] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:07.975 [2024-12-06 04:13:55.433952] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:07.975 [2024-12-06 04:13:55.433959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:07.975 [2024-12-06 04:13:55.433965] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:07.975 [2024-12-06 04:13:55.433972] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:07.975 [2024-12-06 04:13:55.433979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.975 [2024-12-06 04:13:55.433986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:07.975 [2024-12-06 04:13:55.433994] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms 00:23:07.975 [2024-12-06 04:13:55.434002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.975 [2024-12-06 04:13:55.446108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.975 [2024-12-06 04:13:55.446134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:07.975 [2024-12-06 04:13:55.446144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.089 ms 00:23:07.975 [2024-12-06 04:13:55.446153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.975 [2024-12-06 04:13:55.446490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.975 [2024-12-06 04:13:55.446503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:07.975 [2024-12-06 04:13:55.446511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:23:07.975 [2024-12-06 04:13:55.446522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.975 [2024-12-06 04:13:55.478848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.975 [2024-12-06 04:13:55.478876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:07.975 [2024-12-06 04:13:55.478886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.975 [2024-12-06 04:13:55.478894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.975 [2024-12-06 04:13:55.478945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.975 [2024-12-06 04:13:55.478953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:07.975 [2024-12-06 04:13:55.478961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.975 [2024-12-06 04:13:55.478971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.975 [2024-12-06 04:13:55.479020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.975 [2024-12-06 04:13:55.479029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:07.975 [2024-12-06 04:13:55.479036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.975 [2024-12-06 04:13:55.479044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.975 [2024-12-06 04:13:55.479058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.975 [2024-12-06 04:13:55.479065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:07.975 [2024-12-06 04:13:55.479073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.975 [2024-12-06 04:13:55.479079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.233 [2024-12-06 04:13:55.554612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.233 [2024-12-06 04:13:55.554646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:08.233 [2024-12-06 04:13:55.554657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.233 [2024-12-06 04:13:55.554665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.233 [2024-12-06 04:13:55.616318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.233 [2024-12-06 04:13:55.616352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:23:08.233 [2024-12-06 04:13:55.616361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.233 [2024-12-06 04:13:55.616373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.233 [2024-12-06 04:13:55.616430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.233 [2024-12-06 04:13:55.616440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:08.233 [2024-12-06 04:13:55.616448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.233 [2024-12-06 04:13:55.616455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.233 [2024-12-06 04:13:55.616487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.233 [2024-12-06 04:13:55.616496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:08.233 [2024-12-06 04:13:55.616504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.233 [2024-12-06 04:13:55.616512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.233 [2024-12-06 04:13:55.616597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.233 [2024-12-06 04:13:55.616606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:08.233 [2024-12-06 04:13:55.616614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.234 [2024-12-06 04:13:55.616621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.234 [2024-12-06 04:13:55.616649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.234 [2024-12-06 04:13:55.616657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:08.234 [2024-12-06 04:13:55.616664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.234 [2024-12-06 04:13:55.616671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.234 [2024-12-06 04:13:55.616703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.234 [2024-12-06 04:13:55.616734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:08.234 [2024-12-06 04:13:55.616743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.234 [2024-12-06 04:13:55.616750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.234 [2024-12-06 04:13:55.616787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.234 [2024-12-06 04:13:55.616796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:08.234 [2024-12-06 04:13:55.616804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.234 [2024-12-06 04:13:55.616811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.234 [2024-12-06 04:13:55.616918] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 333.023 ms, result 0 00:23:10.135 00:23:10.135 00:23:10.135 04:13:57 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:23:10.135 [2024-12-06 04:13:57.394963] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
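The spdk_dd invocation above is the read-back half of the restore test: using the bdev configuration in ftl.json, it opens the ftl0 bdev as the input (--ib) and copies 262144 blocks out to a plain file (--of). A minimal sketch of repeating that step by hand, assuming the same CI paths and an already-written reference copy of the data (the testfile.reference name is purely illustrative):

    #!/usr/bin/env bash
    # Sketch: manually re-run the FTL read-back step seen in the log above.
    SPDK=/home/vagrant/spdk_repo/spdk
    CFG=$SPDK/test/ftl/config/ftl.json   # JSON config that defines the ftl0 bdev
    OUT=$SPDK/test/ftl/testfile

    # Copy 262144 blocks from the FTL bdev into a regular file.
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$OUT" --json="$CFG" --count=262144

    # Compare against the data originally written to ftl0 (reference copy assumed).
    md5sum "$OUT" "$OUT.reference"

The checksum comparison is an assumption about how the restore is verified; the spdk_dd flags themselves are taken verbatim from the command line in the log.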
00:23:10.135 [2024-12-06 04:13:57.395078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77712 ] 00:23:10.135 [2024-12-06 04:13:57.550013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.135 [2024-12-06 04:13:57.624935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.393 [2024-12-06 04:13:57.834285] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:10.393 [2024-12-06 04:13:57.834338] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:10.651 [2024-12-06 04:13:57.981490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.651 [2024-12-06 04:13:57.981531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:10.651 [2024-12-06 04:13:57.981541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:10.651 [2024-12-06 04:13:57.981547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.651 [2024-12-06 04:13:57.981584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.651 [2024-12-06 04:13:57.981593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:10.651 [2024-12-06 04:13:57.981600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:10.651 [2024-12-06 04:13:57.981605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.651 [2024-12-06 04:13:57.981617] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:10.651 [2024-12-06 04:13:57.982142] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:10.651 [2024-12-06 04:13:57.982164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.651 [2024-12-06 04:13:57.982170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:10.651 [2024-12-06 04:13:57.982177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:23:10.651 [2024-12-06 04:13:57.982184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.652 [2024-12-06 04:13:57.983174] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:10.652 [2024-12-06 04:13:57.992744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.652 [2024-12-06 04:13:57.992773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:10.652 [2024-12-06 04:13:57.992781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.571 ms 00:23:10.652 [2024-12-06 04:13:57.992788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.652 [2024-12-06 04:13:57.992833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.652 [2024-12-06 04:13:57.992840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:10.652 [2024-12-06 04:13:57.992847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:10.652 [2024-12-06 04:13:57.992852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.652 [2024-12-06 04:13:57.997326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:10.652 [2024-12-06 04:13:57.997355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:10.652 [2024-12-06 04:13:57.997362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.436 ms 00:23:10.652 [2024-12-06 04:13:57.997371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.652 [2024-12-06 04:13:57.997425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.652 [2024-12-06 04:13:57.997432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:10.652 [2024-12-06 04:13:57.997438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:10.652 [2024-12-06 04:13:57.997444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.652 [2024-12-06 04:13:57.997483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.652 [2024-12-06 04:13:57.997491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:10.652 [2024-12-06 04:13:57.997497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:10.652 [2024-12-06 04:13:57.997503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.652 [2024-12-06 04:13:57.997519] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:10.652 [2024-12-06 04:13:58.000110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.652 [2024-12-06 04:13:58.000136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:10.652 [2024-12-06 04:13:58.000145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.594 ms 00:23:10.652 [2024-12-06 04:13:58.000151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.652 [2024-12-06 04:13:58.000179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.652 [2024-12-06 04:13:58.000186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:10.652 [2024-12-06 04:13:58.000192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:10.652 [2024-12-06 04:13:58.000198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.652 [2024-12-06 04:13:58.000212] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:10.652 [2024-12-06 04:13:58.000228] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:10.652 [2024-12-06 04:13:58.000254] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:10.652 [2024-12-06 04:13:58.000267] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:10.652 [2024-12-06 04:13:58.000346] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:10.652 [2024-12-06 04:13:58.000354] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:10.652 [2024-12-06 04:13:58.000362] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:10.652 [2024-12-06 04:13:58.000370] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:10.652 [2024-12-06 04:13:58.000376] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:10.652 [2024-12-06 04:13:58.000382] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:10.652 [2024-12-06 04:13:58.000388] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:10.652 [2024-12-06 04:13:58.000395] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:10.652 [2024-12-06 04:13:58.000401] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:10.652 [2024-12-06 04:13:58.000407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.652 [2024-12-06 04:13:58.000413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:10.652 [2024-12-06 04:13:58.000419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:23:10.652 [2024-12-06 04:13:58.000424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.652 [2024-12-06 04:13:58.000486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.652 [2024-12-06 04:13:58.000493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:10.652 [2024-12-06 04:13:58.000498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:10.652 [2024-12-06 04:13:58.000504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.652 [2024-12-06 04:13:58.000580] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:10.652 [2024-12-06 04:13:58.000595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:10.652 [2024-12-06 04:13:58.000602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:10.652 [2024-12-06 04:13:58.000608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.652 [2024-12-06 04:13:58.000613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:10.652 [2024-12-06 04:13:58.000619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:10.652 [2024-12-06 04:13:58.000624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:10.652 [2024-12-06 04:13:58.000629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:10.652 [2024-12-06 04:13:58.000635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:10.652 [2024-12-06 04:13:58.000640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:10.652 [2024-12-06 04:13:58.000646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:10.652 [2024-12-06 04:13:58.000651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:10.652 [2024-12-06 04:13:58.000656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:10.652 [2024-12-06 04:13:58.000666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:10.652 [2024-12-06 04:13:58.000672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:10.652 [2024-12-06 04:13:58.000677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.652 [2024-12-06 04:13:58.000682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:10.652 [2024-12-06 04:13:58.000687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:10.652 [2024-12-06 04:13:58.000692] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.652 [2024-12-06 04:13:58.000697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:10.652 [2024-12-06 04:13:58.000702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:10.652 [2024-12-06 04:13:58.000707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:10.652 [2024-12-06 04:13:58.000712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:10.652 [2024-12-06 04:13:58.000728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:10.652 [2024-12-06 04:13:58.000733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:10.652 [2024-12-06 04:13:58.000738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:10.652 [2024-12-06 04:13:58.000743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:10.652 [2024-12-06 04:13:58.000748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:10.652 [2024-12-06 04:13:58.000753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:10.652 [2024-12-06 04:13:58.000758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:10.652 [2024-12-06 04:13:58.000763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:10.652 [2024-12-06 04:13:58.000767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:10.652 [2024-12-06 04:13:58.000773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:10.652 [2024-12-06 04:13:58.000777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:10.652 [2024-12-06 04:13:58.000782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:10.652 [2024-12-06 04:13:58.000787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:10.652 [2024-12-06 04:13:58.000793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:10.652 [2024-12-06 04:13:58.000798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:10.652 [2024-12-06 04:13:58.000803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:10.652 [2024-12-06 04:13:58.000808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.652 [2024-12-06 04:13:58.000813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:10.652 [2024-12-06 04:13:58.000817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:10.652 [2024-12-06 04:13:58.000822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.652 [2024-12-06 04:13:58.000827] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:10.652 [2024-12-06 04:13:58.000833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:10.652 [2024-12-06 04:13:58.000839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:10.652 [2024-12-06 04:13:58.000845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.652 [2024-12-06 04:13:58.000851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:10.652 [2024-12-06 04:13:58.000856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:10.652 [2024-12-06 04:13:58.000862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:10.652 
[2024-12-06 04:13:58.000867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:10.652 [2024-12-06 04:13:58.000872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:10.653 [2024-12-06 04:13:58.000877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:10.653 [2024-12-06 04:13:58.000884] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:10.653 [2024-12-06 04:13:58.000890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:10.653 [2024-12-06 04:13:58.000898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:10.653 [2024-12-06 04:13:58.000904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:10.653 [2024-12-06 04:13:58.000909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:10.653 [2024-12-06 04:13:58.000915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:10.653 [2024-12-06 04:13:58.000920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:10.653 [2024-12-06 04:13:58.000926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:10.653 [2024-12-06 04:13:58.000931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:10.653 [2024-12-06 04:13:58.000937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:10.653 [2024-12-06 04:13:58.000942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:10.653 [2024-12-06 04:13:58.000947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:10.653 [2024-12-06 04:13:58.000953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:10.653 [2024-12-06 04:13:58.000958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:10.653 [2024-12-06 04:13:58.000963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:10.653 [2024-12-06 04:13:58.000968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:10.653 [2024-12-06 04:13:58.000974] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:10.653 [2024-12-06 04:13:58.000980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:10.653 [2024-12-06 04:13:58.000986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:10.653 [2024-12-06 04:13:58.000991] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:10.653 [2024-12-06 04:13:58.000997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:10.653 [2024-12-06 04:13:58.001002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:10.653 [2024-12-06 04:13:58.001007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.653 [2024-12-06 04:13:58.001013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:10.653 [2024-12-06 04:13:58.001019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:23:10.653 [2024-12-06 04:13:58.001025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.653 [2024-12-06 04:13:58.021711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.653 [2024-12-06 04:13:58.021748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:10.653 [2024-12-06 04:13:58.021756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.653 ms 00:23:10.653 [2024-12-06 04:13:58.021764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.653 [2024-12-06 04:13:58.021828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.653 [2024-12-06 04:13:58.021834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:10.653 [2024-12-06 04:13:58.021840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:23:10.653 [2024-12-06 04:13:58.021846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.653 [2024-12-06 04:13:58.059310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.653 [2024-12-06 04:13:58.059344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:10.653 [2024-12-06 04:13:58.059353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.425 ms 00:23:10.653 [2024-12-06 04:13:58.059359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.653 [2024-12-06 04:13:58.059390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.653 [2024-12-06 04:13:58.059398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:10.653 [2024-12-06 04:13:58.059407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:10.653 [2024-12-06 04:13:58.059413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.653 [2024-12-06 04:13:58.059738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.653 [2024-12-06 04:13:58.059758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:10.653 [2024-12-06 04:13:58.059765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:23:10.653 [2024-12-06 04:13:58.059771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.653 [2024-12-06 04:13:58.059868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.653 [2024-12-06 04:13:58.059876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:10.653 [2024-12-06 04:13:58.059882] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:23:10.653 [2024-12-06 04:13:58.059891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.653 [2024-12-06 04:13:58.070280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.653 [2024-12-06 04:13:58.070308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:10.653 [2024-12-06 04:13:58.070318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.373 ms 00:23:10.653 [2024-12-06 04:13:58.070324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.653 [2024-12-06 04:13:58.079960] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:10.653 [2024-12-06 04:13:58.079988] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:10.653 [2024-12-06 04:13:58.079997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.653 [2024-12-06 04:13:58.080004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:10.653 [2024-12-06 04:13:58.080011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.606 ms 00:23:10.653 [2024-12-06 04:13:58.080017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.653 [2024-12-06 04:13:58.098354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.653 [2024-12-06 04:13:58.098394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:10.653 [2024-12-06 04:13:58.098403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.307 ms 00:23:10.653 [2024-12-06 04:13:58.098409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.653 [2024-12-06 04:13:58.107221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.653 [2024-12-06 04:13:58.107248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:10.653 [2024-12-06 04:13:58.107255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.777 ms 00:23:10.653 [2024-12-06 04:13:58.107261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.653 [2024-12-06 04:13:58.115588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.653 [2024-12-06 04:13:58.115615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:10.653 [2024-12-06 04:13:58.115622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.301 ms 00:23:10.653 [2024-12-06 04:13:58.115628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.653 [2024-12-06 04:13:58.116083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.653 [2024-12-06 04:13:58.116104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:10.653 [2024-12-06 04:13:58.116113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:23:10.653 [2024-12-06 04:13:58.116119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.653 [2024-12-06 04:13:58.159095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.653 [2024-12-06 04:13:58.159136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:10.653 [2024-12-06 04:13:58.159149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
42.963 ms 00:23:10.653 [2024-12-06 04:13:58.159157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.653 [2024-12-06 04:13:58.166791] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:10.653 [2024-12-06 04:13:58.168452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.653 [2024-12-06 04:13:58.168478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:10.653 [2024-12-06 04:13:58.168486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.262 ms 00:23:10.653 [2024-12-06 04:13:58.168492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.653 [2024-12-06 04:13:58.168543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.653 [2024-12-06 04:13:58.168551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:10.653 [2024-12-06 04:13:58.168561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:10.653 [2024-12-06 04:13:58.168566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.653 [2024-12-06 04:13:58.168609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.653 [2024-12-06 04:13:58.168621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:10.653 [2024-12-06 04:13:58.168627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:10.653 [2024-12-06 04:13:58.168637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.653 [2024-12-06 04:13:58.168651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.653 [2024-12-06 04:13:58.168657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:10.653 [2024-12-06 04:13:58.168663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:10.653 [2024-12-06 04:13:58.168668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.653 [2024-12-06 04:13:58.168693] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:10.653 [2024-12-06 04:13:58.168700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.653 [2024-12-06 04:13:58.168706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:10.653 [2024-12-06 04:13:58.168712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:10.653 [2024-12-06 04:13:58.168727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.911 [2024-12-06 04:13:58.186610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.911 [2024-12-06 04:13:58.186645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:10.911 [2024-12-06 04:13:58.186660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.868 ms 00:23:10.911 [2024-12-06 04:13:58.186666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.911 [2024-12-06 04:13:58.186732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.911 [2024-12-06 04:13:58.186741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:10.911 [2024-12-06 04:13:58.186748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:10.911 [2024-12-06 04:13:58.186754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.911 
[2024-12-06 04:13:58.187533] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 205.711 ms, result 0 00:23:11.843  [2024-12-06T04:14:20.269Z] Copying: 1024/1024 [MB] (average 47 MBps)[2024-12-06 04:14:20.101577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.742 [2024-12-06 04:14:20.101668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:32.742 [2024-12-06 04:14:20.101693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:32.742 [2024-12-06 04:14:20.101709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.742 [2024-12-06 04:14:20.101767] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:32.742 [2024-12-06 04:14:20.106574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.742 [2024-12-06 04:14:20.106634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:32.742 [2024-12-06 04:14:20.106652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.779 ms 00:23:32.742 [2024-12-06 04:14:20.106666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.742 [2024-12-06 04:14:20.108086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.742 [2024-12-06 04:14:20.108123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:32.742 [2024-12-06 04:14:20.108140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.384 ms 00:23:32.742 [2024-12-06 04:14:20.108154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.742 [2024-12-06 04:14:20.112814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.742 [2024-12-06 04:14:20.112835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:32.742 [2024-12-06 04:14:20.112845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.636 ms 00:23:32.742 [2024-12-06 04:14:20.112858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.742 [2024-12-06 04:14:20.118989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.742
[2024-12-06 04:14:20.119018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:32.742 [2024-12-06 04:14:20.119027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.116 ms 00:23:32.742 [2024-12-06 04:14:20.119034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.742 [2024-12-06 04:14:20.142597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.742 [2024-12-06 04:14:20.142631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:32.742 [2024-12-06 04:14:20.142643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.509 ms 00:23:32.742 [2024-12-06 04:14:20.142652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.742 [2024-12-06 04:14:20.156276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.742 [2024-12-06 04:14:20.156308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:32.742 [2024-12-06 04:14:20.156320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.605 ms 00:23:32.742 [2024-12-06 04:14:20.156328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.742 [2024-12-06 04:14:20.156474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.742 [2024-12-06 04:14:20.156485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:32.742 [2024-12-06 04:14:20.156494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:23:32.742 [2024-12-06 04:14:20.156501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.742 [2024-12-06 04:14:20.179435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.742 [2024-12-06 04:14:20.179574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:32.742 [2024-12-06 04:14:20.179591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.919 ms 00:23:32.742 [2024-12-06 04:14:20.179598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.742 [2024-12-06 04:14:20.202116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.742 [2024-12-06 04:14:20.202238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:32.742 [2024-12-06 04:14:20.202253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.499 ms 00:23:32.742 [2024-12-06 04:14:20.202260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.742 [2024-12-06 04:14:20.224671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.742 [2024-12-06 04:14:20.224703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:32.742 [2024-12-06 04:14:20.224713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.392 ms 00:23:32.742 [2024-12-06 04:14:20.224733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.742 [2024-12-06 04:14:20.247326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.742 [2024-12-06 04:14:20.247357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:32.742 [2024-12-06 04:14:20.247367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.553 ms 00:23:32.742 [2024-12-06 04:14:20.247374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.742 [2024-12-06 04:14:20.247391] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:32.742 [2024-12-06 04:14:20.247408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247590] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:32.742 [2024-12-06 04:14:20.247629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 
04:14:20.247798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:23:32.743 [2024-12-06 04:14:20.247988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.247995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:32.743 [2024-12-06 04:14:20.248183] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:32.743 [2024-12-06 04:14:20.248190] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3668eabb-1b54-40a7-857e-301a1d6d2e94 00:23:32.743 [2024-12-06 04:14:20.248198] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:32.743 [2024-12-06 04:14:20.248205] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:32.743 [2024-12-06 04:14:20.248211] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:32.743 [2024-12-06 04:14:20.248218] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:32.743 [2024-12-06 04:14:20.248230] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:32.743 [2024-12-06 04:14:20.248237] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:32.743 [2024-12-06 04:14:20.248245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:32.743 [2024-12-06 04:14:20.248251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:32.743 [2024-12-06 04:14:20.248257] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:32.743 [2024-12-06 04:14:20.248264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.743 [2024-12-06 04:14:20.248272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:32.743 [2024-12-06 04:14:20.248280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms 00:23:32.743 [2024-12-06 04:14:20.248289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.743 [2024-12-06 04:14:20.260564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.743 [2024-12-06 04:14:20.260594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:32.743 [2024-12-06 04:14:20.260605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.260 ms 00:23:32.743 [2024-12-06 04:14:20.260613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.743 [2024-12-06 04:14:20.260972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.743 [2024-12-06 04:14:20.260985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:32.743 [2024-12-06 04:14:20.260997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:23:32.743 [2024-12-06 04:14:20.261004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.003 [2024-12-06 04:14:20.293686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.003 [2024-12-06 04:14:20.293730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:33.003 [2024-12-06 04:14:20.293740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.003 [2024-12-06 04:14:20.293747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.003 [2024-12-06 04:14:20.293793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.003 [2024-12-06 04:14:20.293801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:33.003 [2024-12-06 04:14:20.293813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:23:33.003 [2024-12-06 04:14:20.293821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.003 [2024-12-06 04:14:20.293872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.003 [2024-12-06 04:14:20.293882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:33.003 [2024-12-06 04:14:20.293890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.003 [2024-12-06 04:14:20.293897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.003 [2024-12-06 04:14:20.293911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.003 [2024-12-06 04:14:20.293918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:33.004 [2024-12-06 04:14:20.293925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.004 [2024-12-06 04:14:20.293935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.004 [2024-12-06 04:14:20.370841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.004 [2024-12-06 04:14:20.370879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:33.004 [2024-12-06 04:14:20.370890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.004 [2024-12-06 04:14:20.370898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.004 [2024-12-06 04:14:20.433927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.004 [2024-12-06 04:14:20.433966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:33.004 [2024-12-06 04:14:20.433980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.004 [2024-12-06 04:14:20.433988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.004 [2024-12-06 04:14:20.434047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.004 [2024-12-06 04:14:20.434056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:33.004 [2024-12-06 04:14:20.434064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.004 [2024-12-06 04:14:20.434071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.004 [2024-12-06 04:14:20.434105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.004 [2024-12-06 04:14:20.434114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:33.004 [2024-12-06 04:14:20.434122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.004 [2024-12-06 04:14:20.434129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.004 [2024-12-06 04:14:20.434213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.004 [2024-12-06 04:14:20.434221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:33.004 [2024-12-06 04:14:20.434229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.004 [2024-12-06 04:14:20.434237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.004 [2024-12-06 04:14:20.434263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.004 [2024-12-06 04:14:20.434271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:33.004 
[2024-12-06 04:14:20.434279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.004 [2024-12-06 04:14:20.434286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.004 [2024-12-06 04:14:20.434321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.004 [2024-12-06 04:14:20.434329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:33.004 [2024-12-06 04:14:20.434337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.004 [2024-12-06 04:14:20.434344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.004 [2024-12-06 04:14:20.434380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.004 [2024-12-06 04:14:20.434389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:33.004 [2024-12-06 04:14:20.434397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.004 [2024-12-06 04:14:20.434404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.004 [2024-12-06 04:14:20.434529] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 332.945 ms, result 0 00:23:33.571 00:23:33.571 00:23:33.830 04:14:21 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:35.731 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:35.731 04:14:23 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:35.731 [2024-12-06 04:14:23.203598] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
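Editor's note: the statistics dump earlier in this shutdown reported total writes: 960, user writes: 0, and WAF: inf. WAF here reads as the conventional write-amplification factor, media writes divided by host writes, so a run in which the host wrote nothing yields an infinite ratio. The sketch below shows just that arithmetic; the two values are taken from the dump, while the formula is the textbook definition rather than SPDK's ftl_debug.c code.

#include <stdio.h>

int main(void)
{
    double total_writes = 960.0; /* all media writes, metadata included */
    double user_writes  = 0.0;   /* host-initiated writes */

    /* IEEE 754 double division: 960.0 / 0.0 evaluates to +infinity. */
    double waf = total_writes / user_writes;
    printf("WAF: %g\n", waf);    /* prints "WAF: inf", matching the dump */
    return 0;
}

Once a pass actually writes user data, a finite WAF would be expected in the corresponding shutdown dump.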
00:23:35.731 [2024-12-06 04:14:23.203842] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77981 ] 00:23:35.990 [2024-12-06 04:14:23.357680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.990 [2024-12-06 04:14:23.450191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.248 [2024-12-06 04:14:23.704670] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:36.248 [2024-12-06 04:14:23.704751] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:36.508 [2024-12-06 04:14:23.857594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.508 [2024-12-06 04:14:23.857642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:36.508 [2024-12-06 04:14:23.857654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:36.508 [2024-12-06 04:14:23.857662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.508 [2024-12-06 04:14:23.857704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.508 [2024-12-06 04:14:23.857734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:36.508 [2024-12-06 04:14:23.857743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:36.508 [2024-12-06 04:14:23.857750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.508 [2024-12-06 04:14:23.857767] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:36.508 [2024-12-06 04:14:23.858481] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:36.508 [2024-12-06 04:14:23.858502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.508 [2024-12-06 04:14:23.858509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:36.508 [2024-12-06 04:14:23.858517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:23:36.508 [2024-12-06 04:14:23.858525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.508 [2024-12-06 04:14:23.859593] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:36.508 [2024-12-06 04:14:23.871634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.508 [2024-12-06 04:14:23.871667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:36.508 [2024-12-06 04:14:23.871678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.043 ms 00:23:36.508 [2024-12-06 04:14:23.871686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.508 [2024-12-06 04:14:23.871753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.508 [2024-12-06 04:14:23.871763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:36.508 [2024-12-06 04:14:23.871771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:36.508 [2024-12-06 04:14:23.871778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.508 [2024-12-06 04:14:23.876480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:36.508 [2024-12-06 04:14:23.876510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:36.508 [2024-12-06 04:14:23.876519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.645 ms 00:23:36.508 [2024-12-06 04:14:23.876530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.508 [2024-12-06 04:14:23.876597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.508 [2024-12-06 04:14:23.876605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:36.508 [2024-12-06 04:14:23.876613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:36.508 [2024-12-06 04:14:23.876620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.508 [2024-12-06 04:14:23.876659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.508 [2024-12-06 04:14:23.876668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:36.508 [2024-12-06 04:14:23.876676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:36.508 [2024-12-06 04:14:23.876683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.508 [2024-12-06 04:14:23.876705] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:36.508 [2024-12-06 04:14:23.880022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.508 [2024-12-06 04:14:23.880060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:36.508 [2024-12-06 04:14:23.880072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.322 ms 00:23:36.508 [2024-12-06 04:14:23.880079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.508 [2024-12-06 04:14:23.880113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.508 [2024-12-06 04:14:23.880121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:36.508 [2024-12-06 04:14:23.880129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:36.508 [2024-12-06 04:14:23.880136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.508 [2024-12-06 04:14:23.880154] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:36.508 [2024-12-06 04:14:23.880173] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:36.508 [2024-12-06 04:14:23.880205] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:36.508 [2024-12-06 04:14:23.880222] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:36.508 [2024-12-06 04:14:23.880324] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:36.508 [2024-12-06 04:14:23.880334] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:36.508 [2024-12-06 04:14:23.880344] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:36.508 [2024-12-06 04:14:23.880354] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:36.508 [2024-12-06 04:14:23.880362] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:36.508 [2024-12-06 04:14:23.880370] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:36.508 [2024-12-06 04:14:23.880377] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:36.508 [2024-12-06 04:14:23.880386] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:36.508 [2024-12-06 04:14:23.880393] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:36.508 [2024-12-06 04:14:23.880400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.508 [2024-12-06 04:14:23.880408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:36.508 [2024-12-06 04:14:23.880415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:23:36.508 [2024-12-06 04:14:23.880422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.508 [2024-12-06 04:14:23.880503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.508 [2024-12-06 04:14:23.880511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:36.508 [2024-12-06 04:14:23.880518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:36.508 [2024-12-06 04:14:23.880525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.508 [2024-12-06 04:14:23.880627] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:36.508 [2024-12-06 04:14:23.880637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:36.508 [2024-12-06 04:14:23.880644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.508 [2024-12-06 04:14:23.880652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.508 [2024-12-06 04:14:23.880659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:36.508 [2024-12-06 04:14:23.880666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:36.508 [2024-12-06 04:14:23.880672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:36.508 [2024-12-06 04:14:23.880680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:36.508 [2024-12-06 04:14:23.880686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:36.508 [2024-12-06 04:14:23.880693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.508 [2024-12-06 04:14:23.880699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:36.508 [2024-12-06 04:14:23.880706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:36.508 [2024-12-06 04:14:23.880712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.508 [2024-12-06 04:14:23.880743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:36.508 [2024-12-06 04:14:23.880750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:36.508 [2024-12-06 04:14:23.880757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.508 [2024-12-06 04:14:23.880763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:36.508 [2024-12-06 04:14:23.880769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:36.508 [2024-12-06 04:14:23.880777] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.508 [2024-12-06 04:14:23.880784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:36.508 [2024-12-06 04:14:23.880790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:36.508 [2024-12-06 04:14:23.880797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.508 [2024-12-06 04:14:23.880803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:36.508 [2024-12-06 04:14:23.880810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:36.508 [2024-12-06 04:14:23.880816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.508 [2024-12-06 04:14:23.880823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:36.509 [2024-12-06 04:14:23.880829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:36.509 [2024-12-06 04:14:23.880835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.509 [2024-12-06 04:14:23.880841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:36.509 [2024-12-06 04:14:23.880848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:36.509 [2024-12-06 04:14:23.880854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.509 [2024-12-06 04:14:23.880861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:36.509 [2024-12-06 04:14:23.880867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:36.509 [2024-12-06 04:14:23.880873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.509 [2024-12-06 04:14:23.880880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:36.509 [2024-12-06 04:14:23.880886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:36.509 [2024-12-06 04:14:23.880892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.509 [2024-12-06 04:14:23.880899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:36.509 [2024-12-06 04:14:23.880905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:36.509 [2024-12-06 04:14:23.880911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.509 [2024-12-06 04:14:23.880917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:36.509 [2024-12-06 04:14:23.880924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:36.509 [2024-12-06 04:14:23.880931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.509 [2024-12-06 04:14:23.880937] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:36.509 [2024-12-06 04:14:23.880944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:36.509 [2024-12-06 04:14:23.880951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.509 [2024-12-06 04:14:23.880958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.509 [2024-12-06 04:14:23.880965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:36.509 [2024-12-06 04:14:23.880972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:36.509 [2024-12-06 04:14:23.880978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:36.509 
[2024-12-06 04:14:23.880986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:36.509 [2024-12-06 04:14:23.880992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:36.509 [2024-12-06 04:14:23.880999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:36.509 [2024-12-06 04:14:23.881007] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:36.509 [2024-12-06 04:14:23.881015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.509 [2024-12-06 04:14:23.881025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:36.509 [2024-12-06 04:14:23.881033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:36.509 [2024-12-06 04:14:23.881039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:36.509 [2024-12-06 04:14:23.881046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:36.509 [2024-12-06 04:14:23.881053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:36.509 [2024-12-06 04:14:23.881060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:36.509 [2024-12-06 04:14:23.881067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:36.509 [2024-12-06 04:14:23.881074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:36.509 [2024-12-06 04:14:23.881080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:36.509 [2024-12-06 04:14:23.881087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:36.509 [2024-12-06 04:14:23.881094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:36.509 [2024-12-06 04:14:23.881101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:36.509 [2024-12-06 04:14:23.881108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:36.509 [2024-12-06 04:14:23.881115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:36.509 [2024-12-06 04:14:23.881122] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:36.509 [2024-12-06 04:14:23.881130] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.509 [2024-12-06 04:14:23.881138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:36.509 [2024-12-06 04:14:23.881145] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:36.509 [2024-12-06 04:14:23.881152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:36.509 [2024-12-06 04:14:23.881159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:36.509 [2024-12-06 04:14:23.881167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.509 [2024-12-06 04:14:23.881173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:36.509 [2024-12-06 04:14:23.881181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:23:36.509 [2024-12-06 04:14:23.881188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.509 [2024-12-06 04:14:23.906810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.509 [2024-12-06 04:14:23.906955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:36.509 [2024-12-06 04:14:23.906971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.568 ms 00:23:36.509 [2024-12-06 04:14:23.906983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.509 [2024-12-06 04:14:23.907065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.509 [2024-12-06 04:14:23.907073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:36.509 [2024-12-06 04:14:23.907080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:36.509 [2024-12-06 04:14:23.907088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.509 [2024-12-06 04:14:23.958201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.509 [2024-12-06 04:14:23.958238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:36.509 [2024-12-06 04:14:23.958250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.066 ms 00:23:36.509 [2024-12-06 04:14:23.958258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.509 [2024-12-06 04:14:23.958295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.509 [2024-12-06 04:14:23.958305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:36.509 [2024-12-06 04:14:23.958316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:36.509 [2024-12-06 04:14:23.958323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.509 [2024-12-06 04:14:23.958699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.509 [2024-12-06 04:14:23.958742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:36.509 [2024-12-06 04:14:23.958752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:23:36.509 [2024-12-06 04:14:23.958760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.509 [2024-12-06 04:14:23.958879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.509 [2024-12-06 04:14:23.958892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:36.509 [2024-12-06 04:14:23.958904] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:23:36.509 [2024-12-06 04:14:23.958911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.509 [2024-12-06 04:14:23.971688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.509 [2024-12-06 04:14:23.971740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:36.509 [2024-12-06 04:14:23.971751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.757 ms 00:23:36.509 [2024-12-06 04:14:23.971758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.509 [2024-12-06 04:14:23.983896] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:36.509 [2024-12-06 04:14:23.984050] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:36.509 [2024-12-06 04:14:23.984065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.509 [2024-12-06 04:14:23.984073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:36.509 [2024-12-06 04:14:23.984082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.219 ms 00:23:36.509 [2024-12-06 04:14:23.984089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.509 [2024-12-06 04:14:24.008105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.509 [2024-12-06 04:14:24.008138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:36.509 [2024-12-06 04:14:24.008149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.983 ms 00:23:36.509 [2024-12-06 04:14:24.008156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.509 [2024-12-06 04:14:24.019419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.509 [2024-12-06 04:14:24.019548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:36.509 [2024-12-06 04:14:24.019564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.220 ms 00:23:36.509 [2024-12-06 04:14:24.019571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.509 [2024-12-06 04:14:24.030742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.509 [2024-12-06 04:14:24.030771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:36.509 [2024-12-06 04:14:24.030781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.144 ms 00:23:36.509 [2024-12-06 04:14:24.030788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.509 [2024-12-06 04:14:24.031376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.509 [2024-12-06 04:14:24.031400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:36.510 [2024-12-06 04:14:24.031412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:23:36.510 [2024-12-06 04:14:24.031420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.768 [2024-12-06 04:14:24.085925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.769 [2024-12-06 04:14:24.086076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:36.769 [2024-12-06 04:14:24.086100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.488 ms 00:23:36.769 [2024-12-06 04:14:24.086108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.769 [2024-12-06 04:14:24.096211] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:36.769 [2024-12-06 04:14:24.098303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.769 [2024-12-06 04:14:24.098331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:36.769 [2024-12-06 04:14:24.098342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.162 ms 00:23:36.769 [2024-12-06 04:14:24.098351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.769 [2024-12-06 04:14:24.098430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.769 [2024-12-06 04:14:24.098441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:36.769 [2024-12-06 04:14:24.098453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:36.769 [2024-12-06 04:14:24.098470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.769 [2024-12-06 04:14:24.098535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.769 [2024-12-06 04:14:24.098547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:36.769 [2024-12-06 04:14:24.098556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:36.769 [2024-12-06 04:14:24.098565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.769 [2024-12-06 04:14:24.098584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.769 [2024-12-06 04:14:24.098593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:36.769 [2024-12-06 04:14:24.098602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:36.769 [2024-12-06 04:14:24.098609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.769 [2024-12-06 04:14:24.098640] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:36.769 [2024-12-06 04:14:24.098650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.769 [2024-12-06 04:14:24.098657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:36.769 [2024-12-06 04:14:24.098664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:36.769 [2024-12-06 04:14:24.098671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.769 [2024-12-06 04:14:24.121932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.769 [2024-12-06 04:14:24.121965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:36.769 [2024-12-06 04:14:24.121980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.244 ms 00:23:36.769 [2024-12-06 04:14:24.121988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.769 [2024-12-06 04:14:24.122053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.769 [2024-12-06 04:14:24.122062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:36.769 [2024-12-06 04:14:24.122070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:36.769 [2024-12-06 04:14:24.122077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
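Editor's note: the layout dump for this startup gives two views of the same geometry: the human-readable region table (l2p at offset 0.12 MiB, blocks: 80.00 MiB) and the raw superblock rows (Region type:0x2 blk_offs:0x20 blk_sz:0x5000). The two agree if the FTL block size is 4 KiB, which is an assumption here, not something the log states. The sketch below performs that conversion and also cross-checks that the reported 20971520 L2P entries at an address size of 4 bytes fill exactly those 80 MiB.

#include <stdio.h>
#include <stdint.h>

#define FTL_BLOCK_SIZE 4096.0   /* assumed 4 KiB FTL block; not in the log */
#define MiB (1024.0 * 1024.0)

int main(void)
{
    /* l2p row of the superblock region table in the dump above */
    uint64_t blk_offs = 0x20, blk_sz = 0x5000;
    /* L2P parameters from the same dump */
    uint64_t l2p_entries = 20971520, addr_size = 4;

    printf("l2p offset:    %.2f MiB\n", blk_offs * FTL_BLOCK_SIZE / MiB);
    printf("l2p region:    %.2f MiB\n", blk_sz * FTL_BLOCK_SIZE / MiB);
    printf("l2p table fit: %.2f MiB\n", l2p_entries * addr_size / MiB);
    return 0;
}

The same conversion applied to the other rows reproduces the rest of the dump, e.g. the p2l regions at blk_sz 0x800 come out to the 8.00 MiB shown above.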
00:23:36.769 [2024-12-06 04:14:24.122958] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 264.942 ms, result 0 00:23:37.703  [2024-12-06T04:14:26.161Z] Copying: 44/1024 [MB] (44 MBps) [2024-12-06T04:14:27.536Z] Copying: 90/1024 [MB] (45 MBps) [2024-12-06T04:14:28.470Z] Copying: 136/1024 [MB] (45 MBps) [2024-12-06T04:14:29.405Z] Copying: 181/1024 [MB] (45 MBps) [2024-12-06T04:14:30.339Z] Copying: 234/1024 [MB] (53 MBps) [2024-12-06T04:14:31.275Z] Copying: 279/1024 [MB] (45 MBps) [2024-12-06T04:14:32.210Z] Copying: 324/1024 [MB] (45 MBps) [2024-12-06T04:14:33.146Z] Copying: 369/1024 [MB] (44 MBps) [2024-12-06T04:14:34.524Z] Copying: 415/1024 [MB] (45 MBps) [2024-12-06T04:14:35.466Z] Copying: 463/1024 [MB] (47 MBps) [2024-12-06T04:14:36.399Z] Copying: 490/1024 [MB] (27 MBps) [2024-12-06T04:14:37.334Z] Copying: 515/1024 [MB] (25 MBps) [2024-12-06T04:14:38.276Z] Copying: 538/1024 [MB] (23 MBps) [2024-12-06T04:14:39.217Z] Copying: 568/1024 [MB] (29 MBps) [2024-12-06T04:14:40.154Z] Copying: 608/1024 [MB] (40 MBps) [2024-12-06T04:14:41.561Z] Copying: 653/1024 [MB] (45 MBps) [2024-12-06T04:14:42.148Z] Copying: 702/1024 [MB] (48 MBps) [2024-12-06T04:14:43.525Z] Copying: 756/1024 [MB] (53 MBps) [2024-12-06T04:14:44.460Z] Copying: 804/1024 [MB] (48 MBps) [2024-12-06T04:14:45.396Z] Copying: 844/1024 [MB] (39 MBps) [2024-12-06T04:14:46.330Z] Copying: 890/1024 [MB] (46 MBps) [2024-12-06T04:14:47.264Z] Copying: 936/1024 [MB] (45 MBps) [2024-12-06T04:14:48.201Z] Copying: 981/1024 [MB] (45 MBps) [2024-12-06T04:14:49.137Z] Copying: 1023/1024 [MB] (41 MBps) [2024-12-06T04:14:49.137Z] Copying: 1024/1024 [MB] (average 41 MBps)[2024-12-06 04:14:49.118625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.610 [2024-12-06 04:14:49.118681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:01.610 [2024-12-06 04:14:49.118701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:01.610 [2024-12-06 04:14:49.118710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.610 [2024-12-06 04:14:49.121850] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:01.610 [2024-12-06 04:14:49.127266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.610 [2024-12-06 04:14:49.127298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:01.610 [2024-12-06 04:14:49.127309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.379 ms 00:24:01.610 [2024-12-06 04:14:49.127317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.871 [2024-12-06 04:14:49.137849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.871 [2024-12-06 04:14:49.137884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:01.871 [2024-12-06 04:14:49.137894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.602 ms 00:24:01.871 [2024-12-06 04:14:49.137906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.871 [2024-12-06 04:14:49.155610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.871 [2024-12-06 04:14:49.155669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:01.871 [2024-12-06 04:14:49.155682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.688 ms 00:24:01.871 [2024-12-06 
04:14:49.155691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.871 [2024-12-06 04:14:49.161831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.871 [2024-12-06 04:14:49.161861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:01.871 [2024-12-06 04:14:49.161871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.095 ms 00:24:01.871 [2024-12-06 04:14:49.161886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.871 [2024-12-06 04:14:49.185793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.871 [2024-12-06 04:14:49.185839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:01.871 [2024-12-06 04:14:49.185850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.854 ms 00:24:01.871 [2024-12-06 04:14:49.185858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.871 [2024-12-06 04:14:49.200124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.871 [2024-12-06 04:14:49.200166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:01.871 [2024-12-06 04:14:49.200177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.229 ms 00:24:01.871 [2024-12-06 04:14:49.200185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.871 [2024-12-06 04:14:49.257829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.871 [2024-12-06 04:14:49.257900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:01.871 [2024-12-06 04:14:49.257913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.603 ms 00:24:01.871 [2024-12-06 04:14:49.257920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.871 [2024-12-06 04:14:49.281808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.871 [2024-12-06 04:14:49.281848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:01.871 [2024-12-06 04:14:49.281861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.872 ms 00:24:01.871 [2024-12-06 04:14:49.281868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.871 [2024-12-06 04:14:49.304470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.871 [2024-12-06 04:14:49.304499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:01.871 [2024-12-06 04:14:49.304509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.568 ms 00:24:01.871 [2024-12-06 04:14:49.304516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.871 [2024-12-06 04:14:49.326588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.871 [2024-12-06 04:14:49.326618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:01.871 [2024-12-06 04:14:49.326628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.042 ms 00:24:01.871 [2024-12-06 04:14:49.326635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.871 [2024-12-06 04:14:49.349004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.871 [2024-12-06 04:14:49.349033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:01.871 [2024-12-06 04:14:49.349043] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.317 ms 00:24:01.871 [2024-12-06 04:14:49.349049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.871 [2024-12-06 04:14:49.349079] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:01.871 [2024-12-06 04:14:49.349093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 123392 / 261120 wr_cnt: 1 state: open 00:24:01.871 [2024-12-06 04:14:49.349103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:24:01.871 [2024-12-06 04:14:49.349268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:01.871 [2024-12-06 04:14:49.349314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349856] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:01.872 [2024-12-06 04:14:49.349886] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:01.872 [2024-12-06 04:14:49.349894] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3668eabb-1b54-40a7-857e-301a1d6d2e94 00:24:01.872 [2024-12-06 04:14:49.349903] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 123392 00:24:01.872 [2024-12-06 04:14:49.349909] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 124352 00:24:01.872 [2024-12-06 04:14:49.349916] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 123392 00:24:01.872 [2024-12-06 04:14:49.349924] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0078 00:24:01.872 [2024-12-06 04:14:49.349939] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:01.872 [2024-12-06 04:14:49.349947] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:01.872 [2024-12-06 04:14:49.349954] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:01.872 [2024-12-06 04:14:49.349960] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:01.872 [2024-12-06 04:14:49.349967] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:01.872 [2024-12-06 04:14:49.349973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.872 [2024-12-06 04:14:49.349981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:01.872 [2024-12-06 04:14:49.349989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.895 ms 00:24:01.872 [2024-12-06 04:14:49.349996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.872 [2024-12-06 04:14:49.362370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.872 [2024-12-06 04:14:49.362400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:01.872 [2024-12-06 04:14:49.362415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.359 ms 00:24:01.872 [2024-12-06 04:14:49.362422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.872 [2024-12-06 04:14:49.362791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.872 [2024-12-06 04:14:49.362802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:01.872 [2024-12-06 04:14:49.362810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:24:01.872 [2024-12-06 04:14:49.362817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.873 [2024-12-06 04:14:49.395252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.873 [2024-12-06 04:14:49.395284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:01.873 [2024-12-06 04:14:49.395293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.873 [2024-12-06 04:14:49.395301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.873 [2024-12-06 04:14:49.395348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:24:01.873 [2024-12-06 04:14:49.395355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:01.873 [2024-12-06 04:14:49.395367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.873 [2024-12-06 04:14:49.395374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.873 [2024-12-06 04:14:49.395425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.873 [2024-12-06 04:14:49.395438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:01.873 [2024-12-06 04:14:49.395445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.873 [2024-12-06 04:14:49.395453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.873 [2024-12-06 04:14:49.395467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.873 [2024-12-06 04:14:49.395474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:01.873 [2024-12-06 04:14:49.395481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.873 [2024-12-06 04:14:49.395488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.132 [2024-12-06 04:14:49.470584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.132 [2024-12-06 04:14:49.470625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:02.132 [2024-12-06 04:14:49.470635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.132 [2024-12-06 04:14:49.470642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.132 [2024-12-06 04:14:49.532609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.132 [2024-12-06 04:14:49.532655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:02.132 [2024-12-06 04:14:49.532666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.132 [2024-12-06 04:14:49.532673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.132 [2024-12-06 04:14:49.532760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.132 [2024-12-06 04:14:49.532771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:02.132 [2024-12-06 04:14:49.532795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.132 [2024-12-06 04:14:49.532805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.132 [2024-12-06 04:14:49.532838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.132 [2024-12-06 04:14:49.532847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:02.132 [2024-12-06 04:14:49.532855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.132 [2024-12-06 04:14:49.532863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.132 [2024-12-06 04:14:49.532944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.132 [2024-12-06 04:14:49.532954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:02.132 [2024-12-06 04:14:49.532961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.132 [2024-12-06 04:14:49.532971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.132 
[2024-12-06 04:14:49.532998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.132 [2024-12-06 04:14:49.533007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:02.132 [2024-12-06 04:14:49.533014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.132 [2024-12-06 04:14:49.533021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.132 [2024-12-06 04:14:49.533054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.132 [2024-12-06 04:14:49.533062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:02.132 [2024-12-06 04:14:49.533070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.132 [2024-12-06 04:14:49.533078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.132 [2024-12-06 04:14:49.533119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.132 [2024-12-06 04:14:49.533128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:02.132 [2024-12-06 04:14:49.533135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.132 [2024-12-06 04:14:49.533143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.132 [2024-12-06 04:14:49.533250] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 416.596 ms, result 0 00:24:04.664 00:24:04.664 00:24:04.664 04:14:51 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:04.664 [2024-12-06 04:14:52.001167] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:24:04.664 [2024-12-06 04:14:52.001289] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78272 ] 00:24:04.664 [2024-12-06 04:14:52.158702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.940 [2024-12-06 04:14:52.256680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.198 [2024-12-06 04:14:52.511676] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:05.198 [2024-12-06 04:14:52.511760] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:05.198 [2024-12-06 04:14:52.665733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.198 [2024-12-06 04:14:52.665791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:05.198 [2024-12-06 04:14:52.665805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:05.198 [2024-12-06 04:14:52.665813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.198 [2024-12-06 04:14:52.665861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.198 [2024-12-06 04:14:52.665873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:05.198 [2024-12-06 04:14:52.665881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:05.198 [2024-12-06 04:14:52.665888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.198 [2024-12-06 04:14:52.665908] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:05.198 [2024-12-06 04:14:52.666584] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:05.198 [2024-12-06 04:14:52.666601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.198 [2024-12-06 04:14:52.666609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:05.198 [2024-12-06 04:14:52.666617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:24:05.198 [2024-12-06 04:14:52.666624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.198 [2024-12-06 04:14:52.667753] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:05.198 [2024-12-06 04:14:52.679961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.198 [2024-12-06 04:14:52.679996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:05.198 [2024-12-06 04:14:52.680008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.209 ms 00:24:05.198 [2024-12-06 04:14:52.680015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.198 [2024-12-06 04:14:52.680069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.198 [2024-12-06 04:14:52.680078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:05.199 [2024-12-06 04:14:52.680086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:05.199 [2024-12-06 04:14:52.680094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.199 [2024-12-06 04:14:52.684823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:05.199 [2024-12-06 04:14:52.684853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:05.199 [2024-12-06 04:14:52.684862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.675 ms 00:24:05.199 [2024-12-06 04:14:52.684873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.199 [2024-12-06 04:14:52.684937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.199 [2024-12-06 04:14:52.684946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:05.199 [2024-12-06 04:14:52.684954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:24:05.199 [2024-12-06 04:14:52.684961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.199 [2024-12-06 04:14:52.685002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.199 [2024-12-06 04:14:52.685011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:05.199 [2024-12-06 04:14:52.685019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:05.199 [2024-12-06 04:14:52.685026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.199 [2024-12-06 04:14:52.685050] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:05.199 [2024-12-06 04:14:52.688297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.199 [2024-12-06 04:14:52.688325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:05.199 [2024-12-06 04:14:52.688336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.252 ms 00:24:05.199 [2024-12-06 04:14:52.688343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.199 [2024-12-06 04:14:52.688371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.199 [2024-12-06 04:14:52.688379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:05.199 [2024-12-06 04:14:52.688388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:05.199 [2024-12-06 04:14:52.688396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.199 [2024-12-06 04:14:52.688414] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:05.199 [2024-12-06 04:14:52.688432] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:05.199 [2024-12-06 04:14:52.688465] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:05.199 [2024-12-06 04:14:52.688482] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:05.199 [2024-12-06 04:14:52.688583] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:05.199 [2024-12-06 04:14:52.688593] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:05.199 [2024-12-06 04:14:52.688603] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:05.199 [2024-12-06 04:14:52.688612] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:05.199 [2024-12-06 04:14:52.688621] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:05.199 [2024-12-06 04:14:52.688628] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:05.199 [2024-12-06 04:14:52.688636] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:05.199 [2024-12-06 04:14:52.688645] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:05.199 [2024-12-06 04:14:52.688652] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:05.199 [2024-12-06 04:14:52.688659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.199 [2024-12-06 04:14:52.688666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:05.199 [2024-12-06 04:14:52.688673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:24:05.199 [2024-12-06 04:14:52.688680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.199 [2024-12-06 04:14:52.688777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.199 [2024-12-06 04:14:52.688785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:05.199 [2024-12-06 04:14:52.688793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:24:05.199 [2024-12-06 04:14:52.688800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.199 [2024-12-06 04:14:52.688901] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:05.199 [2024-12-06 04:14:52.688915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:05.199 [2024-12-06 04:14:52.688923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:05.199 [2024-12-06 04:14:52.688930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:05.199 [2024-12-06 04:14:52.688937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:05.199 [2024-12-06 04:14:52.688944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:05.199 [2024-12-06 04:14:52.688951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:05.199 [2024-12-06 04:14:52.688959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:05.199 [2024-12-06 04:14:52.688965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:05.199 [2024-12-06 04:14:52.688972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:05.199 [2024-12-06 04:14:52.688978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:05.199 [2024-12-06 04:14:52.688985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:05.199 [2024-12-06 04:14:52.688991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:05.199 [2024-12-06 04:14:52.689003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:05.199 [2024-12-06 04:14:52.689010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:05.199 [2024-12-06 04:14:52.689016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:05.199 [2024-12-06 04:14:52.689022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:05.199 [2024-12-06 04:14:52.689029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:05.199 [2024-12-06 04:14:52.689037] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:05.199 [2024-12-06 04:14:52.689044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:05.199 [2024-12-06 04:14:52.689051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:05.199 [2024-12-06 04:14:52.689058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:05.199 [2024-12-06 04:14:52.689064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:05.199 [2024-12-06 04:14:52.689070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:05.199 [2024-12-06 04:14:52.689077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:05.199 [2024-12-06 04:14:52.689083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:05.199 [2024-12-06 04:14:52.689089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:05.199 [2024-12-06 04:14:52.689095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:05.199 [2024-12-06 04:14:52.689102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:05.199 [2024-12-06 04:14:52.689108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:05.199 [2024-12-06 04:14:52.689114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:05.199 [2024-12-06 04:14:52.689120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:05.199 [2024-12-06 04:14:52.689126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:05.199 [2024-12-06 04:14:52.689132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:05.199 [2024-12-06 04:14:52.689139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:05.199 [2024-12-06 04:14:52.689145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:05.199 [2024-12-06 04:14:52.689151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:05.199 [2024-12-06 04:14:52.689157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:05.199 [2024-12-06 04:14:52.689163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:05.199 [2024-12-06 04:14:52.689169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:05.199 [2024-12-06 04:14:52.689175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:05.199 [2024-12-06 04:14:52.689182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:05.199 [2024-12-06 04:14:52.689188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:05.199 [2024-12-06 04:14:52.689195] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:05.199 [2024-12-06 04:14:52.689202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:05.199 [2024-12-06 04:14:52.689209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:05.199 [2024-12-06 04:14:52.689216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:05.199 [2024-12-06 04:14:52.689223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:05.199 [2024-12-06 04:14:52.689229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:05.199 [2024-12-06 04:14:52.689236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:05.199 
[2024-12-06 04:14:52.689244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:05.199 [2024-12-06 04:14:52.689250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:05.199 [2024-12-06 04:14:52.689256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:05.199 [2024-12-06 04:14:52.689264] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:05.199 [2024-12-06 04:14:52.689273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:05.199 [2024-12-06 04:14:52.689285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:05.199 [2024-12-06 04:14:52.689292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:05.199 [2024-12-06 04:14:52.689299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:05.199 [2024-12-06 04:14:52.689306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:05.199 [2024-12-06 04:14:52.689312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:05.199 [2024-12-06 04:14:52.689319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:05.199 [2024-12-06 04:14:52.689326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:05.199 [2024-12-06 04:14:52.689333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:05.199 [2024-12-06 04:14:52.689340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:05.199 [2024-12-06 04:14:52.689346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:05.199 [2024-12-06 04:14:52.689353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:05.199 [2024-12-06 04:14:52.689360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:05.199 [2024-12-06 04:14:52.689367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:05.199 [2024-12-06 04:14:52.689374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:05.199 [2024-12-06 04:14:52.689381] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:05.199 [2024-12-06 04:14:52.689389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:05.199 [2024-12-06 04:14:52.689398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:05.199 [2024-12-06 04:14:52.689406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:05.199 [2024-12-06 04:14:52.689414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:05.199 [2024-12-06 04:14:52.689421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:05.199 [2024-12-06 04:14:52.689428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.199 [2024-12-06 04:14:52.689435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:05.199 [2024-12-06 04:14:52.689442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:24:05.199 [2024-12-06 04:14:52.689449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.199 [2024-12-06 04:14:52.715088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.199 [2024-12-06 04:14:52.715229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:05.199 [2024-12-06 04:14:52.715283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.586 ms 00:24:05.199 [2024-12-06 04:14:52.715312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.199 [2024-12-06 04:14:52.715408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.199 [2024-12-06 04:14:52.715428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:05.199 [2024-12-06 04:14:52.715447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:24:05.199 [2024-12-06 04:14:52.715466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.457 [2024-12-06 04:14:52.757426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.457 [2024-12-06 04:14:52.757590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:05.457 [2024-12-06 04:14:52.757651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.857 ms 00:24:05.457 [2024-12-06 04:14:52.757675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.457 [2024-12-06 04:14:52.757743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.457 [2024-12-06 04:14:52.757771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:05.457 [2024-12-06 04:14:52.757796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:05.457 [2024-12-06 04:14:52.757815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.457 [2024-12-06 04:14:52.758182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.457 [2024-12-06 04:14:52.758289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:05.457 [2024-12-06 04:14:52.758351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:24:05.457 [2024-12-06 04:14:52.758374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.457 [2024-12-06 04:14:52.758524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.457 [2024-12-06 04:14:52.758554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:05.457 [2024-12-06 04:14:52.758641] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:24:05.457 [2024-12-06 04:14:52.758666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.457 [2024-12-06 04:14:52.771494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.457 [2024-12-06 04:14:52.771618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:05.457 [2024-12-06 04:14:52.771668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.794 ms 00:24:05.457 [2024-12-06 04:14:52.771689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.457 [2024-12-06 04:14:52.783733] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:05.457 [2024-12-06 04:14:52.783856] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:05.457 [2024-12-06 04:14:52.783913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.457 [2024-12-06 04:14:52.783934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:05.457 [2024-12-06 04:14:52.783953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.106 ms 00:24:05.457 [2024-12-06 04:14:52.783971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.457 [2024-12-06 04:14:52.808369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.457 [2024-12-06 04:14:52.808491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:05.457 [2024-12-06 04:14:52.808539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.355 ms 00:24:05.457 [2024-12-06 04:14:52.808561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.457 [2024-12-06 04:14:52.820359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.457 [2024-12-06 04:14:52.820494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:05.457 [2024-12-06 04:14:52.820549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.494 ms 00:24:05.457 [2024-12-06 04:14:52.820573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.457 [2024-12-06 04:14:52.831825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.457 [2024-12-06 04:14:52.831948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:05.457 [2024-12-06 04:14:52.831996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.210 ms 00:24:05.457 [2024-12-06 04:14:52.832017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.457 [2024-12-06 04:14:52.832628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.457 [2024-12-06 04:14:52.832727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:05.457 [2024-12-06 04:14:52.832781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:24:05.457 [2024-12-06 04:14:52.832803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.458 [2024-12-06 04:14:52.886767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.458 [2024-12-06 04:14:52.886942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:05.458 [2024-12-06 04:14:52.887000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 53.933 ms 00:24:05.458 [2024-12-06 04:14:52.887023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.458 [2024-12-06 04:14:52.897584] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:05.458 [2024-12-06 04:14:52.900075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.458 [2024-12-06 04:14:52.900185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:05.458 [2024-12-06 04:14:52.900239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.758 ms 00:24:05.458 [2024-12-06 04:14:52.900261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.458 [2024-12-06 04:14:52.900377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.458 [2024-12-06 04:14:52.900746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:05.458 [2024-12-06 04:14:52.900777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:05.458 [2024-12-06 04:14:52.900786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.458 [2024-12-06 04:14:52.902263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.458 [2024-12-06 04:14:52.902354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:05.458 [2024-12-06 04:14:52.902405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.408 ms 00:24:05.458 [2024-12-06 04:14:52.902426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.458 [2024-12-06 04:14:52.902477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.458 [2024-12-06 04:14:52.902500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:05.458 [2024-12-06 04:14:52.902556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:05.458 [2024-12-06 04:14:52.902577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.458 [2024-12-06 04:14:52.902626] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:05.458 [2024-12-06 04:14:52.902649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.458 [2024-12-06 04:14:52.902701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:05.458 [2024-12-06 04:14:52.902735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:24:05.458 [2024-12-06 04:14:52.902754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.458 [2024-12-06 04:14:52.925997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.458 [2024-12-06 04:14:52.926116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:05.458 [2024-12-06 04:14:52.926177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.209 ms 00:24:05.458 [2024-12-06 04:14:52.926199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.458 [2024-12-06 04:14:52.926518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.458 [2024-12-06 04:14:52.926580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:05.458 [2024-12-06 04:14:52.926655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:05.458 [2024-12-06 04:14:52.926678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:05.458 [2024-12-06 04:14:52.927713] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 261.560 ms, result 0 00:24:06.831  [2024-12-06T04:14:55.292Z] Copying: 43/1024 [MB] (43 MBps) [2024-12-06T04:14:56.224Z] Copying: 89/1024 [MB] (46 MBps) [2024-12-06T04:14:57.158Z] Copying: 128/1024 [MB] (38 MBps) [2024-12-06T04:14:58.532Z] Copying: 150/1024 [MB] (22 MBps) [2024-12-06T04:14:59.464Z] Copying: 168/1024 [MB] (17 MBps) [2024-12-06T04:15:00.399Z] Copying: 185/1024 [MB] (17 MBps) [2024-12-06T04:15:01.334Z] Copying: 221/1024 [MB] (36 MBps) [2024-12-06T04:15:02.267Z] Copying: 233/1024 [MB] (11 MBps) [2024-12-06T04:15:03.199Z] Copying: 259/1024 [MB] (25 MBps) [2024-12-06T04:15:04.131Z] Copying: 278/1024 [MB] (19 MBps) [2024-12-06T04:15:05.504Z] Copying: 296/1024 [MB] (17 MBps) [2024-12-06T04:15:06.433Z] Copying: 309/1024 [MB] (13 MBps) [2024-12-06T04:15:07.363Z] Copying: 331/1024 [MB] (21 MBps) [2024-12-06T04:15:08.299Z] Copying: 363/1024 [MB] (32 MBps) [2024-12-06T04:15:09.234Z] Copying: 395/1024 [MB] (31 MBps) [2024-12-06T04:15:10.173Z] Copying: 414/1024 [MB] (18 MBps) [2024-12-06T04:15:11.112Z] Copying: 436/1024 [MB] (22 MBps) [2024-12-06T04:15:12.486Z] Copying: 452/1024 [MB] (15 MBps) [2024-12-06T04:15:13.420Z] Copying: 472/1024 [MB] (20 MBps) [2024-12-06T04:15:14.354Z] Copying: 493/1024 [MB] (21 MBps) [2024-12-06T04:15:15.293Z] Copying: 540/1024 [MB] (47 MBps) [2024-12-06T04:15:16.233Z] Copying: 589/1024 [MB] (48 MBps) [2024-12-06T04:15:17.165Z] Copying: 637/1024 [MB] (48 MBps) [2024-12-06T04:15:18.539Z] Copying: 658/1024 [MB] (20 MBps) [2024-12-06T04:15:19.474Z] Copying: 669/1024 [MB] (11 MBps) [2024-12-06T04:15:20.410Z] Copying: 687/1024 [MB] (17 MBps) [2024-12-06T04:15:21.351Z] Copying: 708/1024 [MB] (21 MBps) [2024-12-06T04:15:22.297Z] Copying: 729/1024 [MB] (20 MBps) [2024-12-06T04:15:23.230Z] Copying: 744/1024 [MB] (15 MBps) [2024-12-06T04:15:24.164Z] Copying: 760/1024 [MB] (15 MBps) [2024-12-06T04:15:25.539Z] Copying: 786/1024 [MB] (26 MBps) [2024-12-06T04:15:26.469Z] Copying: 806/1024 [MB] (19 MBps) [2024-12-06T04:15:27.400Z] Copying: 824/1024 [MB] (17 MBps) [2024-12-06T04:15:28.330Z] Copying: 835/1024 [MB] (11 MBps) [2024-12-06T04:15:29.262Z] Copying: 855/1024 [MB] (20 MBps) [2024-12-06T04:15:30.195Z] Copying: 905/1024 [MB] (49 MBps) [2024-12-06T04:15:31.131Z] Copying: 957/1024 [MB] (51 MBps) [2024-12-06T04:15:31.697Z] Copying: 1005/1024 [MB] (47 MBps) [2024-12-06T04:15:31.956Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-12-06 04:15:31.906297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.429 [2024-12-06 04:15:31.906361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:44.429 [2024-12-06 04:15:31.906391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:44.429 [2024-12-06 04:15:31.906402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.429 [2024-12-06 04:15:31.906429] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:44.429 [2024-12-06 04:15:31.909249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.429 [2024-12-06 04:15:31.909286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:44.429 [2024-12-06 04:15:31.909301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.801 ms 00:24:44.429 [2024-12-06 04:15:31.909313] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:44.429 [2024-12-06 04:15:31.909599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.429 [2024-12-06 04:15:31.909624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:44.429 [2024-12-06 04:15:31.909638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:24:44.429 [2024-12-06 04:15:31.909655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.429 [2024-12-06 04:15:31.913734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.429 [2024-12-06 04:15:31.913854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:44.429 [2024-12-06 04:15:31.913875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.056 ms 00:24:44.429 [2024-12-06 04:15:31.913887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.429 [2024-12-06 04:15:31.922476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.429 [2024-12-06 04:15:31.922614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:44.429 [2024-12-06 04:15:31.922709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.541 ms 00:24:44.430 [2024-12-06 04:15:31.922764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.430 [2024-12-06 04:15:31.947398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.430 [2024-12-06 04:15:31.947529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:44.430 [2024-12-06 04:15:31.947588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.480 ms 00:24:44.430 [2024-12-06 04:15:31.947610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.690 [2024-12-06 04:15:31.961321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.690 [2024-12-06 04:15:31.961436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:44.690 [2024-12-06 04:15:31.961492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.671 ms 00:24:44.690 [2024-12-06 04:15:31.961515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.690 [2024-12-06 04:15:32.019026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.690 [2024-12-06 04:15:32.019133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:44.690 [2024-12-06 04:15:32.019187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.469 ms 00:24:44.690 [2024-12-06 04:15:32.019209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.690 [2024-12-06 04:15:32.041769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.690 [2024-12-06 04:15:32.041885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:44.690 [2024-12-06 04:15:32.041931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.533 ms 00:24:44.690 [2024-12-06 04:15:32.041952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.690 [2024-12-06 04:15:32.063952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.690 [2024-12-06 04:15:32.064051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:44.690 [2024-12-06 04:15:32.064097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.962 ms 00:24:44.690 
[2024-12-06 04:15:32.064118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.690 [2024-12-06 04:15:32.086251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.690 [2024-12-06 04:15:32.086352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:44.690 [2024-12-06 04:15:32.086397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.096 ms 00:24:44.690 [2024-12-06 04:15:32.086418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.690 [2024-12-06 04:15:32.108535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.690 [2024-12-06 04:15:32.108633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:44.690 [2024-12-06 04:15:32.108678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.050 ms 00:24:44.690 [2024-12-06 04:15:32.108699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.690 [2024-12-06 04:15:32.108790] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:44.690 [2024-12-06 04:15:32.108835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:24:44.690 [2024-12-06 04:15:32.108866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:44.690 [2024-12-06 04:15:32.108894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:44.690 [2024-12-06 04:15:32.108977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:44.690 [2024-12-06 04:15:32.109007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:44.690 [2024-12-06 04:15:32.109035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:44.690 [2024-12-06 04:15:32.109063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:44.690 [2024-12-06 04:15:32.109091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:44.690 [2024-12-06 04:15:32.109152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:44.690 [2024-12-06 04:15:32.109181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:44.690 [2024-12-06 04:15:32.109209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:44.690 [2024-12-06 04:15:32.109236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:44.690 [2024-12-06 04:15:32.109297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:44.690 [2024-12-06 04:15:32.109326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:44.690 [2024-12-06 04:15:32.109354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:44.690 [2024-12-06 04:15:32.109381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:44.690 [2024-12-06 04:15:32.109439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:44.690 
[2024-12-06 04:15:32.109470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:44.690 [2024-12-06 04:15:32.109498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:44.690 [2024-12-06 04:15:32.109525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:44.690 [2024-12-06 04:15:32.109553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:44.690 [2024-12-06 04:15:32.109607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:44.690 [2024-12-06 04:15:32.109637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.109664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.109692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.109729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.109821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.109853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.109881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.109909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.109936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.109964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 
state: free 00:24:44.691 [2024-12-06 04:15:32.110440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.110989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 
0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.111977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.112005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.112062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.112091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.112119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.112146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.112175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.112232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.112261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.112289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.112317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.112373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.112404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.112431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:44.691 [2024-12-06 04:15:32.112459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:44.692 [2024-12-06 04:15:32.112486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:44.692 [2024-12-06 04:15:32.112553] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:44.692 [2024-12-06 04:15:32.112573] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3668eabb-1b54-40a7-857e-301a1d6d2e94 00:24:44.692 [2024-12-06 04:15:32.112602] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:24:44.692 [2024-12-06 04:15:32.112610] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 8640 00:24:44.692 [2024-12-06 04:15:32.112618] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 7680 00:24:44.692 [2024-12-06 04:15:32.112626] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.1250 00:24:44.692 [2024-12-06 04:15:32.112638] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:44.692 [2024-12-06 04:15:32.112651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:44.692 [2024-12-06 04:15:32.112658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:44.692 [2024-12-06 04:15:32.112665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:44.692 [2024-12-06 04:15:32.112672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:44.692 [2024-12-06 04:15:32.112679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.692 [2024-12-06 04:15:32.112687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:44.692 [2024-12-06 04:15:32.112694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.891 ms 00:24:44.692 [2024-12-06 04:15:32.112701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.692 [2024-12-06 04:15:32.124831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.692 [2024-12-06 04:15:32.124930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:44.692 [2024-12-06 04:15:32.124982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.089 ms 00:24:44.692 [2024-12-06 04:15:32.125003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.692 [2024-12-06 04:15:32.125363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.692 [2024-12-06 04:15:32.125388] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:44.692 [2024-12-06 04:15:32.125434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:24:44.692 [2024-12-06 04:15:32.125456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.692 [2024-12-06 04:15:32.157956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.692 [2024-12-06 04:15:32.158066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:44.692 [2024-12-06 04:15:32.158117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.692 [2024-12-06 04:15:32.158139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.692 [2024-12-06 04:15:32.158224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.692 [2024-12-06 04:15:32.158249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:44.692 [2024-12-06 04:15:32.158296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.692 [2024-12-06 04:15:32.158318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.692 [2024-12-06 04:15:32.158385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.692 [2024-12-06 04:15:32.158475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:44.692 [2024-12-06 04:15:32.158500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.692 [2024-12-06 04:15:32.158557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.692 [2024-12-06 04:15:32.158587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.692 [2024-12-06 04:15:32.158607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:44.692 [2024-12-06 04:15:32.158626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.692 [2024-12-06 04:15:32.158677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.951 [2024-12-06 04:15:32.235212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.951 [2024-12-06 04:15:32.235352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:44.951 [2024-12-06 04:15:32.235399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.951 [2024-12-06 04:15:32.235421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.951 [2024-12-06 04:15:32.297894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.951 [2024-12-06 04:15:32.298032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:44.951 [2024-12-06 04:15:32.298076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.951 [2024-12-06 04:15:32.298098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.951 [2024-12-06 04:15:32.298179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.951 [2024-12-06 04:15:32.298203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:44.951 [2024-12-06 04:15:32.298222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.951 [2024-12-06 04:15:32.298244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.951 [2024-12-06 04:15:32.298287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:24:44.951 [2024-12-06 04:15:32.298350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:44.951 [2024-12-06 04:15:32.298372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.951 [2024-12-06 04:15:32.298392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.951 [2024-12-06 04:15:32.298507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.951 [2024-12-06 04:15:32.298590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:44.951 [2024-12-06 04:15:32.298600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.951 [2024-12-06 04:15:32.298607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.951 [2024-12-06 04:15:32.298644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.951 [2024-12-06 04:15:32.298654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:44.951 [2024-12-06 04:15:32.298662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.951 [2024-12-06 04:15:32.298669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.951 [2024-12-06 04:15:32.298701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.951 [2024-12-06 04:15:32.298710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:44.951 [2024-12-06 04:15:32.298741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.951 [2024-12-06 04:15:32.298749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.951 [2024-12-06 04:15:32.298789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.951 [2024-12-06 04:15:32.298799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:44.951 [2024-12-06 04:15:32.298807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.951 [2024-12-06 04:15:32.298814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.951 [2024-12-06 04:15:32.298921] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 392.598 ms, result 0 00:24:45.517 00:24:45.517 00:24:45.517 04:15:32 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:47.417 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:47.417 04:15:34 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:47.417 04:15:34 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:47.417 04:15:34 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:47.676 04:15:35 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:47.676 04:15:35 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:47.676 Process with pid 77247 is not found 00:24:47.676 Remove shared memory files 00:24:47.676 04:15:35 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77247 00:24:47.676 04:15:35 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77247 ']' 00:24:47.676 04:15:35 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77247 00:24:47.676 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77247) - No such process 
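The actual pass/fail signal for ftl_restore is the md5sum -c line above: the test records a checksum of testfile when the data is first written, then re-verifies it after the FTL device has been shut down and restored, so 'testfile: OK' is the restore verdict. Reduced to its core, the pattern looks like this (a sketch; the exact sequencing lives in test/ftl/restore.sh):

  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile > /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
  # ... shut down and restore the FTL bdev, re-read testfile from it ...
  md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5    # prints 'testfile: OK' on success

The kill -0 probe that follows is only cleanup: kill -0 sends no signal, it just tests whether pid 77247 still exists, so 'No such process' here means the target had already exited.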
00:24:47.676 04:15:35 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77247 is not found'
00:24:47.676 04:15:35 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm
00:24:47.676 04:15:35 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files
00:24:47.676 04:15:35 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f
00:24:47.676 04:15:35 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f
00:24:47.676 04:15:35 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f
00:24:47.676 04:15:35 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:24:47.676 04:15:35 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f
00:24:47.676 ************************************
00:24:47.676 END TEST ftl_restore
00:24:47.676 ************************************
00:24:47.676
00:24:47.676 real 2m21.644s
00:24:47.676 user 2m11.819s
00:24:47.676 sys 0m11.201s
00:24:47.676 04:15:35 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:47.676 04:15:35 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:24:47.676 04:15:35 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:24:47.676 04:15:35 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:24:47.676 04:15:35 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:47.676 04:15:35 ftl -- common/autotest_common.sh@10 -- # set +x
00:24:47.676 ************************************
00:24:47.676 START TEST ftl_dirty_shutdown
00:24:47.676 ************************************
00:24:47.676 04:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:24:47.676 * Looking for test storage...
00:24:47.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:47.676 04:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:47.676 04:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:24:47.676 04:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:47.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.934 --rc genhtml_branch_coverage=1 00:24:47.934 --rc genhtml_function_coverage=1 00:24:47.934 --rc genhtml_legend=1 00:24:47.934 --rc geninfo_all_blocks=1 00:24:47.934 --rc geninfo_unexecuted_blocks=1 00:24:47.934 00:24:47.934 ' 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:47.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.934 --rc genhtml_branch_coverage=1 00:24:47.934 --rc genhtml_function_coverage=1 00:24:47.934 --rc genhtml_legend=1 00:24:47.934 --rc geninfo_all_blocks=1 00:24:47.934 --rc geninfo_unexecuted_blocks=1 00:24:47.934 00:24:47.934 ' 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:47.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.934 --rc genhtml_branch_coverage=1 00:24:47.934 --rc genhtml_function_coverage=1 00:24:47.934 --rc genhtml_legend=1 00:24:47.934 --rc geninfo_all_blocks=1 00:24:47.934 --rc geninfo_unexecuted_blocks=1 00:24:47.934 00:24:47.934 ' 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:47.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.934 --rc genhtml_branch_coverage=1 00:24:47.934 --rc genhtml_function_coverage=1 00:24:47.934 --rc genhtml_legend=1 00:24:47.934 --rc geninfo_all_blocks=1 00:24:47.934 --rc geninfo_unexecuted_blocks=1 00:24:47.934 00:24:47.934 ' 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:47.934 04:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:47.935 04:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:47.935 04:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:47.935 04:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:47.935 04:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:47.935 04:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:47.935 04:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:47.935 04:15:35 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:47.935 04:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:47.935 04:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:47.935 04:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:47.935 04:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:47.935 04:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:47.935 04:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78788 00:24:47.935 04:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:47.935 04:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78788 00:24:47.935 04:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 78788 ']' 00:24:47.935 04:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.935 04:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.935 04:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.935 04:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.935 04:15:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:47.935 [2024-12-06 04:15:35.309170] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:24:47.935 [2024-12-06 04:15:35.309463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78788 ] 00:24:48.192 [2024-12-06 04:15:35.469629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.192 [2024-12-06 04:15:35.565199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.757 04:15:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:48.757 04:15:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:24:48.757 04:15:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:48.757 04:15:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:48.757 04:15:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:48.757 04:15:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:48.757 04:15:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:48.757 04:15:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:49.018 04:15:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:49.018 04:15:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:49.018 04:15:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:49.018 04:15:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:49.018 04:15:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:49.018 04:15:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:49.018 04:15:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:49.018 04:15:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:49.276 04:15:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:49.276 { 00:24:49.276 "name": "nvme0n1", 00:24:49.276 "aliases": [ 00:24:49.276 "c50df382-9a3f-49a1-9eaf-4d571f106e2a" 00:24:49.276 ], 00:24:49.276 "product_name": "NVMe disk", 00:24:49.276 "block_size": 4096, 00:24:49.276 "num_blocks": 1310720, 00:24:49.276 "uuid": "c50df382-9a3f-49a1-9eaf-4d571f106e2a", 00:24:49.276 "numa_id": -1, 00:24:49.276 "assigned_rate_limits": { 00:24:49.276 "rw_ios_per_sec": 0, 00:24:49.276 "rw_mbytes_per_sec": 0, 00:24:49.276 "r_mbytes_per_sec": 0, 00:24:49.276 "w_mbytes_per_sec": 0 00:24:49.276 }, 00:24:49.276 "claimed": true, 00:24:49.276 "claim_type": "read_many_write_one", 00:24:49.276 "zoned": false, 00:24:49.276 "supported_io_types": { 00:24:49.276 "read": true, 00:24:49.276 "write": true, 00:24:49.276 "unmap": true, 00:24:49.276 "flush": true, 00:24:49.276 "reset": true, 00:24:49.276 "nvme_admin": true, 00:24:49.276 "nvme_io": true, 00:24:49.276 "nvme_io_md": false, 00:24:49.276 "write_zeroes": true, 00:24:49.276 "zcopy": false, 00:24:49.276 "get_zone_info": false, 00:24:49.276 "zone_management": false, 00:24:49.276 "zone_append": false, 00:24:49.276 "compare": true, 00:24:49.276 "compare_and_write": false, 00:24:49.276 "abort": true, 00:24:49.276 "seek_hole": false, 00:24:49.276 "seek_data": false, 00:24:49.276 
"copy": true, 00:24:49.276 "nvme_iov_md": false 00:24:49.276 }, 00:24:49.276 "driver_specific": { 00:24:49.276 "nvme": [ 00:24:49.276 { 00:24:49.276 "pci_address": "0000:00:11.0", 00:24:49.276 "trid": { 00:24:49.276 "trtype": "PCIe", 00:24:49.276 "traddr": "0000:00:11.0" 00:24:49.276 }, 00:24:49.276 "ctrlr_data": { 00:24:49.276 "cntlid": 0, 00:24:49.276 "vendor_id": "0x1b36", 00:24:49.276 "model_number": "QEMU NVMe Ctrl", 00:24:49.276 "serial_number": "12341", 00:24:49.276 "firmware_revision": "8.0.0", 00:24:49.276 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:49.276 "oacs": { 00:24:49.276 "security": 0, 00:24:49.276 "format": 1, 00:24:49.276 "firmware": 0, 00:24:49.276 "ns_manage": 1 00:24:49.276 }, 00:24:49.276 "multi_ctrlr": false, 00:24:49.276 "ana_reporting": false 00:24:49.276 }, 00:24:49.276 "vs": { 00:24:49.276 "nvme_version": "1.4" 00:24:49.276 }, 00:24:49.276 "ns_data": { 00:24:49.276 "id": 1, 00:24:49.276 "can_share": false 00:24:49.276 } 00:24:49.276 } 00:24:49.276 ], 00:24:49.276 "mp_policy": "active_passive" 00:24:49.276 } 00:24:49.276 } 00:24:49.276 ]' 00:24:49.276 04:15:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:49.276 04:15:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:49.276 04:15:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:49.276 04:15:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:49.276 04:15:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:49.276 04:15:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:24:49.276 04:15:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:49.276 04:15:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:49.276 04:15:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:49.276 04:15:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:49.276 04:15:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:49.535 04:15:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=3eb5675c-060f-4f35-a711-83f472a187a4 00:24:49.535 04:15:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:49.535 04:15:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3eb5675c-060f-4f35-a711-83f472a187a4 00:24:49.794 04:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:49.794 04:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=a8e031a2-bb06-44a6-bfda-e5cb4cb2dff9 00:24:49.794 04:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a8e031a2-bb06-44a6-bfda-e5cb4cb2dff9 00:24:50.052 04:15:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=bcc38c12-6d10-4019-891e-b5d105c452a1 00:24:50.053 04:15:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:50.053 04:15:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 bcc38c12-6d10-4019-891e-b5d105c452a1 00:24:50.053 04:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:50.053 04:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:24:50.053 04:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=bcc38c12-6d10-4019-891e-b5d105c452a1 00:24:50.053 04:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:50.053 04:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size bcc38c12-6d10-4019-891e-b5d105c452a1 00:24:50.053 04:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=bcc38c12-6d10-4019-891e-b5d105c452a1 00:24:50.053 04:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:50.053 04:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:50.053 04:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:50.053 04:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bcc38c12-6d10-4019-891e-b5d105c452a1 00:24:50.312 04:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:50.312 { 00:24:50.312 "name": "bcc38c12-6d10-4019-891e-b5d105c452a1", 00:24:50.312 "aliases": [ 00:24:50.312 "lvs/nvme0n1p0" 00:24:50.312 ], 00:24:50.312 "product_name": "Logical Volume", 00:24:50.312 "block_size": 4096, 00:24:50.312 "num_blocks": 26476544, 00:24:50.312 "uuid": "bcc38c12-6d10-4019-891e-b5d105c452a1", 00:24:50.312 "assigned_rate_limits": { 00:24:50.312 "rw_ios_per_sec": 0, 00:24:50.312 "rw_mbytes_per_sec": 0, 00:24:50.312 "r_mbytes_per_sec": 0, 00:24:50.312 "w_mbytes_per_sec": 0 00:24:50.312 }, 00:24:50.312 "claimed": false, 00:24:50.312 "zoned": false, 00:24:50.312 "supported_io_types": { 00:24:50.312 "read": true, 00:24:50.312 "write": true, 00:24:50.312 "unmap": true, 00:24:50.312 "flush": false, 00:24:50.312 "reset": true, 00:24:50.312 "nvme_admin": false, 00:24:50.312 "nvme_io": false, 00:24:50.312 "nvme_io_md": false, 00:24:50.312 "write_zeroes": true, 00:24:50.312 "zcopy": false, 00:24:50.312 "get_zone_info": false, 00:24:50.312 "zone_management": false, 00:24:50.312 "zone_append": false, 00:24:50.312 "compare": false, 00:24:50.312 "compare_and_write": false, 00:24:50.312 "abort": false, 00:24:50.312 "seek_hole": true, 00:24:50.312 "seek_data": true, 00:24:50.312 "copy": false, 00:24:50.312 "nvme_iov_md": false 00:24:50.312 }, 00:24:50.312 "driver_specific": { 00:24:50.312 "lvol": { 00:24:50.312 "lvol_store_uuid": "a8e031a2-bb06-44a6-bfda-e5cb4cb2dff9", 00:24:50.312 "base_bdev": "nvme0n1", 00:24:50.312 "thin_provision": true, 00:24:50.312 "num_allocated_clusters": 0, 00:24:50.312 "snapshot": false, 00:24:50.312 "clone": false, 00:24:50.312 "esnap_clone": false 00:24:50.312 } 00:24:50.312 } 00:24:50.312 } 00:24:50.312 ]' 00:24:50.312 04:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:50.312 04:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:50.312 04:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:50.312 04:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:50.312 04:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:50.312 04:15:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:50.312 04:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:50.312 04:15:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:50.312 04:15:37 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:50.571 04:15:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:50.571 04:15:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:50.571 04:15:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size bcc38c12-6d10-4019-891e-b5d105c452a1 00:24:50.571 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=bcc38c12-6d10-4019-891e-b5d105c452a1 00:24:50.571 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:50.571 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:50.571 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:50.571 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bcc38c12-6d10-4019-891e-b5d105c452a1 00:24:50.830 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:50.830 { 00:24:50.830 "name": "bcc38c12-6d10-4019-891e-b5d105c452a1", 00:24:50.830 "aliases": [ 00:24:50.830 "lvs/nvme0n1p0" 00:24:50.830 ], 00:24:50.830 "product_name": "Logical Volume", 00:24:50.830 "block_size": 4096, 00:24:50.830 "num_blocks": 26476544, 00:24:50.830 "uuid": "bcc38c12-6d10-4019-891e-b5d105c452a1", 00:24:50.830 "assigned_rate_limits": { 00:24:50.830 "rw_ios_per_sec": 0, 00:24:50.830 "rw_mbytes_per_sec": 0, 00:24:50.830 "r_mbytes_per_sec": 0, 00:24:50.830 "w_mbytes_per_sec": 0 00:24:50.830 }, 00:24:50.830 "claimed": false, 00:24:50.830 "zoned": false, 00:24:50.830 "supported_io_types": { 00:24:50.830 "read": true, 00:24:50.830 "write": true, 00:24:50.830 "unmap": true, 00:24:50.830 "flush": false, 00:24:50.830 "reset": true, 00:24:50.830 "nvme_admin": false, 00:24:50.830 "nvme_io": false, 00:24:50.830 "nvme_io_md": false, 00:24:50.830 "write_zeroes": true, 00:24:50.830 "zcopy": false, 00:24:50.830 "get_zone_info": false, 00:24:50.830 "zone_management": false, 00:24:50.830 "zone_append": false, 00:24:50.830 "compare": false, 00:24:50.830 "compare_and_write": false, 00:24:50.830 "abort": false, 00:24:50.830 "seek_hole": true, 00:24:50.830 "seek_data": true, 00:24:50.830 "copy": false, 00:24:50.830 "nvme_iov_md": false 00:24:50.830 }, 00:24:50.830 "driver_specific": { 00:24:50.830 "lvol": { 00:24:50.830 "lvol_store_uuid": "a8e031a2-bb06-44a6-bfda-e5cb4cb2dff9", 00:24:50.830 "base_bdev": "nvme0n1", 00:24:50.830 "thin_provision": true, 00:24:50.830 "num_allocated_clusters": 0, 00:24:50.830 "snapshot": false, 00:24:50.830 "clone": false, 00:24:50.830 "esnap_clone": false 00:24:50.830 } 00:24:50.830 } 00:24:50.830 } 00:24:50.830 ]' 00:24:50.830 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:50.830 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:50.830 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:50.830 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:50.830 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:50.830 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:50.830 04:15:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:50.830 04:15:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:51.089 04:15:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:51.089 04:15:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size bcc38c12-6d10-4019-891e-b5d105c452a1 00:24:51.089 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=bcc38c12-6d10-4019-891e-b5d105c452a1 00:24:51.089 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:51.089 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:51.089 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:51.089 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bcc38c12-6d10-4019-891e-b5d105c452a1 00:24:51.348 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:51.348 { 00:24:51.348 "name": "bcc38c12-6d10-4019-891e-b5d105c452a1", 00:24:51.348 "aliases": [ 00:24:51.348 "lvs/nvme0n1p0" 00:24:51.348 ], 00:24:51.348 "product_name": "Logical Volume", 00:24:51.348 "block_size": 4096, 00:24:51.348 "num_blocks": 26476544, 00:24:51.348 "uuid": "bcc38c12-6d10-4019-891e-b5d105c452a1", 00:24:51.348 "assigned_rate_limits": { 00:24:51.348 "rw_ios_per_sec": 0, 00:24:51.348 "rw_mbytes_per_sec": 0, 00:24:51.348 "r_mbytes_per_sec": 0, 00:24:51.348 "w_mbytes_per_sec": 0 00:24:51.348 }, 00:24:51.348 "claimed": false, 00:24:51.348 "zoned": false, 00:24:51.348 "supported_io_types": { 00:24:51.348 "read": true, 00:24:51.348 "write": true, 00:24:51.348 "unmap": true, 00:24:51.348 "flush": false, 00:24:51.348 "reset": true, 00:24:51.348 "nvme_admin": false, 00:24:51.348 "nvme_io": false, 00:24:51.348 "nvme_io_md": false, 00:24:51.348 "write_zeroes": true, 00:24:51.348 "zcopy": false, 00:24:51.348 "get_zone_info": false, 00:24:51.348 "zone_management": false, 00:24:51.348 "zone_append": false, 00:24:51.348 "compare": false, 00:24:51.348 "compare_and_write": false, 00:24:51.348 "abort": false, 00:24:51.348 "seek_hole": true, 00:24:51.348 "seek_data": true, 00:24:51.348 "copy": false, 00:24:51.348 "nvme_iov_md": false 00:24:51.348 }, 00:24:51.348 "driver_specific": { 00:24:51.348 "lvol": { 00:24:51.348 "lvol_store_uuid": "a8e031a2-bb06-44a6-bfda-e5cb4cb2dff9", 00:24:51.348 "base_bdev": "nvme0n1", 00:24:51.348 "thin_provision": true, 00:24:51.348 "num_allocated_clusters": 0, 00:24:51.348 "snapshot": false, 00:24:51.348 "clone": false, 00:24:51.348 "esnap_clone": false 00:24:51.348 } 00:24:51.348 } 00:24:51.348 } 00:24:51.348 ]' 00:24:51.348 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:51.348 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:51.348 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:51.348 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:51.348 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:51.348 04:15:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:51.348 04:15:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:51.348 04:15:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d bcc38c12-6d10-4019-891e-b5d105c452a1 
--l2p_dram_limit 10' 00:24:51.348 04:15:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:51.348 04:15:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:51.348 04:15:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:51.348 04:15:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d bcc38c12-6d10-4019-891e-b5d105c452a1 --l2p_dram_limit 10 -c nvc0n1p0 00:24:51.608 [2024-12-06 04:15:38.966852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.608 [2024-12-06 04:15:38.967037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:51.608 [2024-12-06 04:15:38.967057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:51.608 [2024-12-06 04:15:38.967064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.608 [2024-12-06 04:15:38.967117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.608 [2024-12-06 04:15:38.967125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:51.608 [2024-12-06 04:15:38.967133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:51.608 [2024-12-06 04:15:38.967139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.608 [2024-12-06 04:15:38.967158] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:51.608 [2024-12-06 04:15:38.967773] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:51.608 [2024-12-06 04:15:38.967789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.608 [2024-12-06 04:15:38.967796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:51.608 [2024-12-06 04:15:38.967803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:24:51.608 [2024-12-06 04:15:38.967809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.608 [2024-12-06 04:15:38.967859] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9cb7737c-45c5-4972-932b-0d23e0036544 00:24:51.608 [2024-12-06 04:15:38.968791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.608 [2024-12-06 04:15:38.968813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:51.608 [2024-12-06 04:15:38.968821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:51.608 [2024-12-06 04:15:38.968828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.608 [2024-12-06 04:15:38.973464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.608 [2024-12-06 04:15:38.973497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:51.608 [2024-12-06 04:15:38.973504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.604 ms 00:24:51.608 [2024-12-06 04:15:38.973511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.608 [2024-12-06 04:15:38.973579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.608 [2024-12-06 04:15:38.973587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:51.608 [2024-12-06 04:15:38.973593] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:51.608 [2024-12-06 04:15:38.973603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.608 [2024-12-06 04:15:38.973636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.608 [2024-12-06 04:15:38.973644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:51.608 [2024-12-06 04:15:38.973652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:51.608 [2024-12-06 04:15:38.973659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.608 [2024-12-06 04:15:38.973675] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:51.608 [2024-12-06 04:15:38.976546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.608 [2024-12-06 04:15:38.976571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:51.608 [2024-12-06 04:15:38.976579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.873 ms 00:24:51.608 [2024-12-06 04:15:38.976585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.608 [2024-12-06 04:15:38.976619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.608 [2024-12-06 04:15:38.976626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:51.608 [2024-12-06 04:15:38.976633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:51.608 [2024-12-06 04:15:38.976639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.608 [2024-12-06 04:15:38.976657] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:51.608 [2024-12-06 04:15:38.976773] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:51.608 [2024-12-06 04:15:38.976785] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:51.608 [2024-12-06 04:15:38.976794] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:51.608 [2024-12-06 04:15:38.976804] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:51.608 [2024-12-06 04:15:38.976810] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:51.608 [2024-12-06 04:15:38.976817] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:51.608 [2024-12-06 04:15:38.976823] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:51.608 [2024-12-06 04:15:38.976832] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:51.608 [2024-12-06 04:15:38.976838] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:51.608 [2024-12-06 04:15:38.976844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.608 [2024-12-06 04:15:38.976855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:51.608 [2024-12-06 04:15:38.976862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:24:51.609 [2024-12-06 04:15:38.976867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.609 [2024-12-06 04:15:38.976933] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.609 [2024-12-06 04:15:38.976940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:51.609 [2024-12-06 04:15:38.976947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:51.609 [2024-12-06 04:15:38.976952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.609 [2024-12-06 04:15:38.977029] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:51.609 [2024-12-06 04:15:38.977036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:51.609 [2024-12-06 04:15:38.977044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:51.609 [2024-12-06 04:15:38.977050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.609 [2024-12-06 04:15:38.977057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:51.609 [2024-12-06 04:15:38.977062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:51.609 [2024-12-06 04:15:38.977068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:51.609 [2024-12-06 04:15:38.977073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:51.609 [2024-12-06 04:15:38.977081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:51.609 [2024-12-06 04:15:38.977086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:51.609 [2024-12-06 04:15:38.977092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:51.609 [2024-12-06 04:15:38.977097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:51.609 [2024-12-06 04:15:38.977103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:51.609 [2024-12-06 04:15:38.977108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:51.609 [2024-12-06 04:15:38.977114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:51.609 [2024-12-06 04:15:38.977120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.609 [2024-12-06 04:15:38.977129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:51.609 [2024-12-06 04:15:38.977134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:51.609 [2024-12-06 04:15:38.977140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.609 [2024-12-06 04:15:38.977145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:51.609 [2024-12-06 04:15:38.977151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:51.609 [2024-12-06 04:15:38.977156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.609 [2024-12-06 04:15:38.977162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:51.609 [2024-12-06 04:15:38.977167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:51.609 [2024-12-06 04:15:38.977173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.609 [2024-12-06 04:15:38.977178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:51.609 [2024-12-06 04:15:38.977184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:51.609 [2024-12-06 04:15:38.977189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.609 [2024-12-06 04:15:38.977195] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:51.609 [2024-12-06 04:15:38.977200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:51.609 [2024-12-06 04:15:38.977206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.609 [2024-12-06 04:15:38.977211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:51.609 [2024-12-06 04:15:38.977218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:51.609 [2024-12-06 04:15:38.977224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:51.609 [2024-12-06 04:15:38.977231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:51.609 [2024-12-06 04:15:38.977236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:51.609 [2024-12-06 04:15:38.977242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:51.609 [2024-12-06 04:15:38.977247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:51.609 [2024-12-06 04:15:38.977253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:51.609 [2024-12-06 04:15:38.977258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.609 [2024-12-06 04:15:38.977264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:51.609 [2024-12-06 04:15:38.977268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:51.609 [2024-12-06 04:15:38.977275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.609 [2024-12-06 04:15:38.977279] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:51.609 [2024-12-06 04:15:38.977286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:51.609 [2024-12-06 04:15:38.977292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:51.609 [2024-12-06 04:15:38.977299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.609 [2024-12-06 04:15:38.977306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:51.609 [2024-12-06 04:15:38.977314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:51.609 [2024-12-06 04:15:38.977318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:51.609 [2024-12-06 04:15:38.977325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:51.609 [2024-12-06 04:15:38.977330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:51.609 [2024-12-06 04:15:38.977337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:51.609 [2024-12-06 04:15:38.977344] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:51.609 [2024-12-06 04:15:38.977353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:51.609 [2024-12-06 04:15:38.977359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:51.609 [2024-12-06 04:15:38.977366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:51.609 [2024-12-06 04:15:38.977371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:51.609 [2024-12-06 04:15:38.977378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:51.609 [2024-12-06 04:15:38.977383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:51.609 [2024-12-06 04:15:38.977391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:51.609 [2024-12-06 04:15:38.977396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:51.609 [2024-12-06 04:15:38.977403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:51.609 [2024-12-06 04:15:38.977408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:51.609 [2024-12-06 04:15:38.977416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:51.609 [2024-12-06 04:15:38.977421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:51.609 [2024-12-06 04:15:38.977428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:51.609 [2024-12-06 04:15:38.977433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:51.609 [2024-12-06 04:15:38.977440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:51.609 [2024-12-06 04:15:38.977445] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:51.609 [2024-12-06 04:15:38.977452] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:51.609 [2024-12-06 04:15:38.977458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:51.609 [2024-12-06 04:15:38.977464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:51.609 [2024-12-06 04:15:38.977470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:51.609 [2024-12-06 04:15:38.977477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:51.609 [2024-12-06 04:15:38.977483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.609 [2024-12-06 04:15:38.977489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:51.609 [2024-12-06 04:15:38.977496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:24:51.609 [2024-12-06 04:15:38.977502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.609 [2024-12-06 04:15:38.977530] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:51.609 [2024-12-06 04:15:38.977540] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:54.142 [2024-12-06 04:15:41.043097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.142 [2024-12-06 04:15:41.043158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:54.142 [2024-12-06 04:15:41.043173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2065.557 ms 00:24:54.142 [2024-12-06 04:15:41.043183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.142 [2024-12-06 04:15:41.068127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.142 [2024-12-06 04:15:41.068171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:54.142 [2024-12-06 04:15:41.068183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.757 ms 00:24:54.142 [2024-12-06 04:15:41.068193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.142 [2024-12-06 04:15:41.068317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.142 [2024-12-06 04:15:41.068329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:54.142 [2024-12-06 04:15:41.068338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:54.142 [2024-12-06 04:15:41.068351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.142 [2024-12-06 04:15:41.098518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.142 [2024-12-06 04:15:41.098556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:54.142 [2024-12-06 04:15:41.098567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.135 ms 00:24:54.142 [2024-12-06 04:15:41.098576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.142 [2024-12-06 04:15:41.098606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.142 [2024-12-06 04:15:41.098619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:54.142 [2024-12-06 04:15:41.098627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:54.142 [2024-12-06 04:15:41.098643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.142 [2024-12-06 04:15:41.098990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.142 [2024-12-06 04:15:41.099008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:54.142 [2024-12-06 04:15:41.099017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:24:54.142 [2024-12-06 04:15:41.099027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.142 [2024-12-06 04:15:41.099126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.142 [2024-12-06 04:15:41.099136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:54.142 [2024-12-06 04:15:41.099146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:24:54.142 [2024-12-06 04:15:41.099156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.142 [2024-12-06 04:15:41.112746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.142 [2024-12-06 04:15:41.112780] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:54.142 [2024-12-06 04:15:41.112790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.573 ms 00:24:54.142 [2024-12-06 04:15:41.112799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.142 [2024-12-06 04:15:41.136789] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:54.142 [2024-12-06 04:15:41.139811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.142 [2024-12-06 04:15:41.139978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:54.142 [2024-12-06 04:15:41.140005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.931 ms 00:24:54.142 [2024-12-06 04:15:41.140016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.142 [2024-12-06 04:15:41.199513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.142 [2024-12-06 04:15:41.199689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:54.142 [2024-12-06 04:15:41.199712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.451 ms 00:24:54.142 [2024-12-06 04:15:41.199737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.142 [2024-12-06 04:15:41.199912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.142 [2024-12-06 04:15:41.199925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:54.142 [2024-12-06 04:15:41.199937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:24:54.142 [2024-12-06 04:15:41.199945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.142 [2024-12-06 04:15:41.223248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.142 [2024-12-06 04:15:41.223282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:54.142 [2024-12-06 04:15:41.223295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.270 ms 00:24:54.142 [2024-12-06 04:15:41.223303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.142 [2024-12-06 04:15:41.245592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.142 [2024-12-06 04:15:41.245623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:54.142 [2024-12-06 04:15:41.245636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.250 ms 00:24:54.142 [2024-12-06 04:15:41.245643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.142 [2024-12-06 04:15:41.246230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.142 [2024-12-06 04:15:41.246250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:54.142 [2024-12-06 04:15:41.246261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:24:54.142 [2024-12-06 04:15:41.246271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.142 [2024-12-06 04:15:41.314426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.142 [2024-12-06 04:15:41.314469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:54.142 [2024-12-06 04:15:41.314484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.122 ms 00:24:54.142 [2024-12-06 04:15:41.314492] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.142 [2024-12-06 04:15:41.338147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.142 [2024-12-06 04:15:41.338181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:54.142 [2024-12-06 04:15:41.338194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.575 ms 00:24:54.142 [2024-12-06 04:15:41.338202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.142 [2024-12-06 04:15:41.361406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.142 [2024-12-06 04:15:41.361564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:54.142 [2024-12-06 04:15:41.361584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.165 ms 00:24:54.142 [2024-12-06 04:15:41.361592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.142 [2024-12-06 04:15:41.384185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.142 [2024-12-06 04:15:41.384305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:54.142 [2024-12-06 04:15:41.384324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.557 ms 00:24:54.142 [2024-12-06 04:15:41.384332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.142 [2024-12-06 04:15:41.384373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.142 [2024-12-06 04:15:41.384383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:54.142 [2024-12-06 04:15:41.384395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:54.142 [2024-12-06 04:15:41.384402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.142 [2024-12-06 04:15:41.384475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.143 [2024-12-06 04:15:41.384487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:54.143 [2024-12-06 04:15:41.384496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:54.143 [2024-12-06 04:15:41.384503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.143 [2024-12-06 04:15:41.385325] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2418.076 ms, result 0 00:24:54.143 { 00:24:54.143 "name": "ftl0", 00:24:54.143 "uuid": "9cb7737c-45c5-4972-932b-0d23e0036544" 00:24:54.143 } 00:24:54.143 04:15:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:54.143 04:15:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:54.143 04:15:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:54.143 04:15:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:54.143 04:15:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:54.402 /dev/nbd0 00:24:54.402 04:15:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:54.402 04:15:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:54.402 04:15:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:24:54.402 04:15:41 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:54.402 04:15:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:54.402 04:15:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:54.402 04:15:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:24:54.402 04:15:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:54.402 04:15:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:54.402 04:15:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:54.402 1+0 records in 00:24:54.402 1+0 records out 00:24:54.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000168603 s, 24.3 MB/s 00:24:54.402 04:15:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:54.402 04:15:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:24:54.402 04:15:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:54.402 04:15:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:54.402 04:15:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:24:54.402 04:15:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:54.402 [2024-12-06 04:15:41.898830] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:24:54.402 [2024-12-06 04:15:41.898939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78913 ] 00:24:54.660 [2024-12-06 04:15:42.057739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.661 [2024-12-06 04:15:42.152937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.039  [2024-12-06T04:15:44.503Z] Copying: 196/1024 [MB] (196 MBps) [2024-12-06T04:15:45.438Z] Copying: 418/1024 [MB] (222 MBps) [2024-12-06T04:15:46.373Z] Copying: 681/1024 [MB] (263 MBps) [2024-12-06T04:15:46.939Z] Copying: 940/1024 [MB] (258 MBps) [2024-12-06T04:15:47.507Z] Copying: 1024/1024 [MB] (average 236 MBps) 00:24:59.980 00:24:59.980 04:15:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:01.882 04:15:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:02.139 [2024-12-06 04:15:49.423566] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
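Condensed, the setup and write phase traced above comes down to the following shell sequence (a minimal sketch assuming a running spdk_tgt reachable via rpc.py; $lvol_uuid stands in for the thin-provisioned lvol created earlier in this run, bcc38c12-6d10-4019-891e-b5d105c452a1, and paths are abbreviated relative to the repo root):

  rpc=scripts/rpc.py

  # Attach the second NVMe controller and carve a 5171 MiB slice of its
  # namespace to serve as the FTL write-buffer (NV) cache.
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $rpc bdev_split_create nvc0n1 -s 5171 1        # yields nvc0n1p0

  # Create the FTL bdev on top of the lvol, capping the L2P table at 10 MiB
  # of DRAM (-t 240 matches the long RPC timeout used in the trace above).
  $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol_uuid" --l2p_dram_limit 10 -c nvc0n1p0

  # Expose ftl0 over NBD and push 1 GiB (262144 x 4 KiB blocks) of random
  # data through it, keeping an md5 of the source file for later comparison.
  modprobe nbd
  $rpc nbd_start_disk ftl0 /dev/nbd0
  build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=test/ftl/testfile --bs=4096 --count=262144
  md5sum test/ftl/testfile
  build/bin/spdk_dd -m 0x2 --if=test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct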
00:25:02.139 [2024-12-06 04:15:49.423686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78990 ] 00:25:02.139 [2024-12-06 04:15:49.579654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.139 [2024-12-06 04:15:49.656535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.511  [2024-12-06T04:15:52.006Z] Copying: 30/1024 [MB] (30 MBps) [2024-12-06T04:15:52.964Z] Copying: 58/1024 [MB] (28 MBps) [2024-12-06T04:15:53.899Z] Copying: 89/1024 [MB] (30 MBps) [2024-12-06T04:15:54.833Z] Copying: 121/1024 [MB] (31 MBps) [2024-12-06T04:15:56.207Z] Copying: 150/1024 [MB] (29 MBps) [2024-12-06T04:15:57.141Z] Copying: 180/1024 [MB] (29 MBps) [2024-12-06T04:15:58.072Z] Copying: 211/1024 [MB] (30 MBps) [2024-12-06T04:15:59.005Z] Copying: 245/1024 [MB] (33 MBps) [2024-12-06T04:15:59.940Z] Copying: 275/1024 [MB] (29 MBps) [2024-12-06T04:16:00.875Z] Copying: 309/1024 [MB] (34 MBps) [2024-12-06T04:16:02.250Z] Copying: 342/1024 [MB] (33 MBps) [2024-12-06T04:16:03.185Z] Copying: 371/1024 [MB] (29 MBps) [2024-12-06T04:16:04.118Z] Copying: 401/1024 [MB] (30 MBps) [2024-12-06T04:16:05.067Z] Copying: 431/1024 [MB] (30 MBps) [2024-12-06T04:16:06.026Z] Copying: 461/1024 [MB] (29 MBps) [2024-12-06T04:16:06.959Z] Copying: 490/1024 [MB] (29 MBps) [2024-12-06T04:16:07.895Z] Copying: 520/1024 [MB] (30 MBps) [2024-12-06T04:16:08.830Z] Copying: 554/1024 [MB] (33 MBps) [2024-12-06T04:16:10.205Z] Copying: 588/1024 [MB] (33 MBps) [2024-12-06T04:16:11.140Z] Copying: 624/1024 [MB] (35 MBps) [2024-12-06T04:16:12.076Z] Copying: 659/1024 [MB] (34 MBps) [2024-12-06T04:16:13.012Z] Copying: 691/1024 [MB] (32 MBps) [2024-12-06T04:16:13.948Z] Copying: 721/1024 [MB] (29 MBps) [2024-12-06T04:16:14.883Z] Copying: 749/1024 [MB] (28 MBps) [2024-12-06T04:16:16.255Z] Copying: 781/1024 [MB] (31 MBps) [2024-12-06T04:16:17.187Z] Copying: 816/1024 [MB] (34 MBps) [2024-12-06T04:16:18.120Z] Copying: 847/1024 [MB] (31 MBps) [2024-12-06T04:16:19.052Z] Copying: 879/1024 [MB] (31 MBps) [2024-12-06T04:16:20.056Z] Copying: 911/1024 [MB] (32 MBps) [2024-12-06T04:16:20.990Z] Copying: 941/1024 [MB] (29 MBps) [2024-12-06T04:16:21.925Z] Copying: 972/1024 [MB] (31 MBps) [2024-12-06T04:16:22.861Z] Copying: 1001/1024 [MB] (29 MBps) [2024-12-06T04:16:23.428Z] Copying: 1024/1024 [MB] (average 31 MBps) 00:25:35.901 00:25:35.901 04:16:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:35.901 04:16:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:35.901 04:16:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:36.161 [2024-12-06 04:16:23.564745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.161 [2024-12-06 04:16:23.564793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:36.161 [2024-12-06 04:16:23.564805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:36.161 [2024-12-06 04:16:23.564813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.161 [2024-12-06 04:16:23.564835] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:36.161 [2024-12-06 
04:16:23.567010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.161 [2024-12-06 04:16:23.567140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:36.161 [2024-12-06 04:16:23.567157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.159 ms 00:25:36.161 [2024-12-06 04:16:23.567164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.161 [2024-12-06 04:16:23.568891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.161 [2024-12-06 04:16:23.568917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:36.161 [2024-12-06 04:16:23.568927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.698 ms 00:25:36.162 [2024-12-06 04:16:23.568933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.162 [2024-12-06 04:16:23.581068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.162 [2024-12-06 04:16:23.581097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:36.162 [2024-12-06 04:16:23.581108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.116 ms 00:25:36.162 [2024-12-06 04:16:23.581114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.162 [2024-12-06 04:16:23.585845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.162 [2024-12-06 04:16:23.585953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:36.162 [2024-12-06 04:16:23.585969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.702 ms 00:25:36.162 [2024-12-06 04:16:23.585975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.162 [2024-12-06 04:16:23.603827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.162 [2024-12-06 04:16:23.603858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:36.162 [2024-12-06 04:16:23.603868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.795 ms 00:25:36.162 [2024-12-06 04:16:23.603875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.162 [2024-12-06 04:16:23.616348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.162 [2024-12-06 04:16:23.616376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:36.162 [2024-12-06 04:16:23.616390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.440 ms 00:25:36.162 [2024-12-06 04:16:23.616396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.162 [2024-12-06 04:16:23.616505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.162 [2024-12-06 04:16:23.616513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:36.162 [2024-12-06 04:16:23.616521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:25:36.162 [2024-12-06 04:16:23.616528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.162 [2024-12-06 04:16:23.634432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.162 [2024-12-06 04:16:23.634472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:36.162 [2024-12-06 04:16:23.634483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.891 ms 00:25:36.162 [2024-12-06 04:16:23.634489] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:36.162 [2024-12-06 04:16:23.651690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.162 [2024-12-06 04:16:23.651731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:36.162 [2024-12-06 04:16:23.651741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.173 ms 00:25:36.162 [2024-12-06 04:16:23.651747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.162 [2024-12-06 04:16:23.669086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.162 [2024-12-06 04:16:23.669116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:36.162 [2024-12-06 04:16:23.669126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.308 ms 00:25:36.162 [2024-12-06 04:16:23.669131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.162 [2024-12-06 04:16:23.686206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.162 [2024-12-06 04:16:23.686235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:36.162 [2024-12-06 04:16:23.686246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.010 ms 00:25:36.162 [2024-12-06 04:16:23.686252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.162 [2024-12-06 04:16:23.686281] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:36.162 [2024-12-06 04:16:23.686292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686387] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 
04:16:23.686561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:36.162 [2024-12-06 04:16:23.686649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 
00:25:36.163 [2024-12-06 04:16:23.686742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 
wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.686998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:36.163 [2024-12-06 04:16:23.687010] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:36.163 [2024-12-06 04:16:23.687017] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9cb7737c-45c5-4972-932b-0d23e0036544 00:25:36.163 [2024-12-06 04:16:23.687023] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:36.163 [2024-12-06 04:16:23.687032] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:36.163 [2024-12-06 04:16:23.687039] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:36.163 [2024-12-06 04:16:23.687046] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:36.163 [2024-12-06 04:16:23.687052] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:36.163 [2024-12-06 04:16:23.687059] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:36.163 [2024-12-06 04:16:23.687064] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:36.163 [2024-12-06 04:16:23.687070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:36.163 [2024-12-06 04:16:23.687075] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:36.163 [2024-12-06 04:16:23.687082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.163 [2024-12-06 04:16:23.687088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:36.163 [2024-12-06 04:16:23.687096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:25:36.163 [2024-12-06 04:16:23.687101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.422 [2024-12-06 04:16:23.696500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.422 [2024-12-06 04:16:23.696528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:36.422 [2024-12-06 
04:16:23.696537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.373 ms 00:25:36.422 [2024-12-06 04:16:23.696543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.422 [2024-12-06 04:16:23.696830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.422 [2024-12-06 04:16:23.696841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:36.422 [2024-12-06 04:16:23.696849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:25:36.422 [2024-12-06 04:16:23.696854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.422 [2024-12-06 04:16:23.729562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:36.422 [2024-12-06 04:16:23.729606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:36.422 [2024-12-06 04:16:23.729616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:36.422 [2024-12-06 04:16:23.729622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.422 [2024-12-06 04:16:23.729677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:36.423 [2024-12-06 04:16:23.729684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:36.423 [2024-12-06 04:16:23.729691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:36.423 [2024-12-06 04:16:23.729697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.423 [2024-12-06 04:16:23.729822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:36.423 [2024-12-06 04:16:23.729833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:36.423 [2024-12-06 04:16:23.729841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:36.423 [2024-12-06 04:16:23.729847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.423 [2024-12-06 04:16:23.729863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:36.423 [2024-12-06 04:16:23.729869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:36.423 [2024-12-06 04:16:23.729877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:36.423 [2024-12-06 04:16:23.729883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.423 [2024-12-06 04:16:23.788844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:36.423 [2024-12-06 04:16:23.788890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:36.423 [2024-12-06 04:16:23.788900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:36.423 [2024-12-06 04:16:23.788906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.423 [2024-12-06 04:16:23.837105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:36.423 [2024-12-06 04:16:23.837149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:36.423 [2024-12-06 04:16:23.837159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:36.423 [2024-12-06 04:16:23.837166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.423 [2024-12-06 04:16:23.837229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:36.423 [2024-12-06 04:16:23.837237] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:36.423 [2024-12-06 04:16:23.837246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:36.423 [2024-12-06 04:16:23.837252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.423 [2024-12-06 04:16:23.837302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:36.423 [2024-12-06 04:16:23.837309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:36.423 [2024-12-06 04:16:23.837316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:36.423 [2024-12-06 04:16:23.837322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.423 [2024-12-06 04:16:23.837394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:36.423 [2024-12-06 04:16:23.837401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:36.423 [2024-12-06 04:16:23.837408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:36.423 [2024-12-06 04:16:23.837416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.423 [2024-12-06 04:16:23.837443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:36.423 [2024-12-06 04:16:23.837450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:36.423 [2024-12-06 04:16:23.837457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:36.423 [2024-12-06 04:16:23.837463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.423 [2024-12-06 04:16:23.837493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:36.423 [2024-12-06 04:16:23.837499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:36.423 [2024-12-06 04:16:23.837507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:36.423 [2024-12-06 04:16:23.837515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.423 [2024-12-06 04:16:23.837552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:36.423 [2024-12-06 04:16:23.837559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:36.423 [2024-12-06 04:16:23.837567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:36.423 [2024-12-06 04:16:23.837573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.423 [2024-12-06 04:16:23.837674] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 272.924 ms, result 0 00:25:36.423 true 00:25:36.423 04:16:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78788 00:25:36.423 04:16:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78788 00:25:36.423 04:16:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:36.423 [2024-12-06 04:16:23.922160] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
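The "FTL shutdown" management trace above finishes with result 0, and the `kill -9` plus `spdk_dd` invocations around it are the heart of this test: dirty_shutdown.sh kills the SPDK target outright instead of letting it tear down in an orderly way, which is why the next startup below has to run blobstore recovery and load the device in the dirty state. It then stages 1 GiB of random reference data and replays it through the FTL bdev, to be checksummed and read back further down. A minimal sketch of that sequence, using only the commands and paths printed in this log (the @NN markers are dirty_shutdown.sh line numbers; `repo`, `spdk_dd`, and `tgt_pid` are placeholder shorthands, not the script's actual variables):

```bash
#!/usr/bin/env bash
# Sketch of the dirty-shutdown sequence recorded in this log; the pid and
# paths are the ones printed above, the variable names are placeholders.
repo=/home/vagrant/spdk_repo/spdk
spdk_dd=$repo/build/bin/spdk_dd
tgt_pid=78788                                # spdk_tgt pid shown in the log

kill -9 "$tgt_pid"                           # @83: no orderly FTL teardown
rm -f "/dev/shm/spdk_tgt_trace.pid$tgt_pid"  # @84: drop the stale trace file

# @87: stage 262144 x 4096 B = 1 GiB of random reference data
"$spdk_dd" --if=/dev/urandom --of="$repo/test/ftl/testfile2" \
           --bs=4096 --count=262144

# @88: replay it into the FTL bdev; opening ftl0 from the JSON config is
# what triggers the dirty-startup recovery traced below
"$spdk_dd" --if="$repo/test/ftl/testfile2" --ob=ftl0 --count=262144 \
           --seek=262144 --json="$repo/test/ftl/config/ftl.json"

# @90 and @93: checksum the reference file, then read 1 GiB back from ftl0
md5sum "$repo/test/ftl/testfile2"
"$spdk_dd" --ib=ftl0 --of="$repo/test/ftl/testfile" --count=262144 \
           --json="$repo/test/ftl/config/ftl.json"
```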
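Several figures in the surrounding dumps can be cross-checked by hand. WAF in the ftl_debug statistics blocks is total writes divided by user writes: the dump above prints "inf" because user writes are still 0, while the post-copy dump further down shows 123072 / 122112, i.e. 1.0079. In the layout dump printed during the startup below, blk_sz values count the 4 KiB FTL blocks implied by the MiB figures, so the l2p region's blk_sz of 0x5000 (20480 blocks) is the reported 80.00 MiB, which in turn equals the 20971520 L2P entries times the 4-byte L2P address size. A quick sanity check, with all values copied from this log:

```bash
#!/usr/bin/env bash
# Cross-checks for numbers printed by ftl_debug.c and ftl_layout.c in this log.

# WAF = total writes / user writes (the pre-write dump above divides by
# 0 user writes, hence "WAF: inf" there)
awk 'BEGIN { printf "WAF: %.4f\n", 123072 / 122112 }'      # -> WAF: 1.0079

# l2p region: blk_sz 0x5000 = 20480 blocks x 4 KiB = 80 MiB ...
echo "l2p region: $(( 0x5000 * 4096 / 1024 / 1024 )) MiB"  # -> 80 MiB

# ... matching L2P entries x L2P address size: 20971520 x 4 B
echo "l2p table:  $(( 20971520 * 4 / 1024 / 1024 )) MiB"   # -> 80 MiB

# spdk_dd payload: 262144 blocks x 4096 B = the 1024 MB shown by the copy bars
echo "dd payload: $(( 262144 * 4096 / 1024 / 1024 )) MiB"  # -> 1024 MiB
```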
00:25:36.423 [2024-12-06 04:16:23.922413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79354 ] 00:25:36.681 [2024-12-06 04:16:24.072136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.682 [2024-12-06 04:16:24.153336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.059  [2024-12-06T04:16:26.521Z] Copying: 259/1024 [MB] (259 MBps) [2024-12-06T04:16:27.455Z] Copying: 522/1024 [MB] (262 MBps) [2024-12-06T04:16:28.391Z] Copying: 783/1024 [MB] (261 MBps) [2024-12-06T04:16:28.958Z] Copying: 1024/1024 [MB] (average 259 MBps) 00:25:41.431 00:25:41.431 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78788 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:25:41.431 04:16:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:41.431 [2024-12-06 04:16:28.898974] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:25:41.432 [2024-12-06 04:16:28.899086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79411 ] 00:25:41.690 [2024-12-06 04:16:29.054188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.691 [2024-12-06 04:16:29.132993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.950 [2024-12-06 04:16:29.344208] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:41.950 [2024-12-06 04:16:29.344414] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:41.950 [2024-12-06 04:16:29.406818] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:41.950 [2024-12-06 04:16:29.407316] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:41.950 [2024-12-06 04:16:29.407593] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:42.209 [2024-12-06 04:16:29.597473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.209 [2024-12-06 04:16:29.597696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:42.209 [2024-12-06 04:16:29.597786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:42.209 [2024-12-06 04:16:29.597821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.209 [2024-12-06 04:16:29.597890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.209 [2024-12-06 04:16:29.598023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:42.209 [2024-12-06 04:16:29.598075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:42.209 [2024-12-06 04:16:29.598094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.209 [2024-12-06 04:16:29.598125] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:42.209 [2024-12-06 04:16:29.598869] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:42.209 [2024-12-06 04:16:29.598968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.209 [2024-12-06 04:16:29.599014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:42.209 [2024-12-06 04:16:29.599038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.848 ms 00:25:42.209 [2024-12-06 04:16:29.599057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.209 [2024-12-06 04:16:29.600083] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:42.209 [2024-12-06 04:16:29.612282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.209 [2024-12-06 04:16:29.612403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:42.209 [2024-12-06 04:16:29.612453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.201 ms 00:25:42.209 [2024-12-06 04:16:29.612475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.209 [2024-12-06 04:16:29.612572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.209 [2024-12-06 04:16:29.612988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:42.209 [2024-12-06 04:16:29.613034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:42.209 [2024-12-06 04:16:29.613091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.209 [2024-12-06 04:16:29.617707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.209 [2024-12-06 04:16:29.617820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:42.209 [2024-12-06 04:16:29.617869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.482 ms 00:25:42.209 [2024-12-06 04:16:29.617891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.209 [2024-12-06 04:16:29.617973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.209 [2024-12-06 04:16:29.617996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:42.209 [2024-12-06 04:16:29.618015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:42.209 [2024-12-06 04:16:29.618057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.209 [2024-12-06 04:16:29.618118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.209 [2024-12-06 04:16:29.618252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:42.209 [2024-12-06 04:16:29.618276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:42.209 [2024-12-06 04:16:29.618294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.209 [2024-12-06 04:16:29.618331] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:42.209 [2024-12-06 04:16:29.621784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.209 [2024-12-06 04:16:29.621878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:42.209 [2024-12-06 04:16:29.621928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.461 ms 00:25:42.209 [2024-12-06 04:16:29.621949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.209 [2024-12-06 
04:16:29.621998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.209 [2024-12-06 04:16:29.622128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:42.209 [2024-12-06 04:16:29.622151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:42.209 [2024-12-06 04:16:29.622169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.210 [2024-12-06 04:16:29.622215] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:42.210 [2024-12-06 04:16:29.622283] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:42.210 [2024-12-06 04:16:29.622341] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:42.210 [2024-12-06 04:16:29.622378] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:42.210 [2024-12-06 04:16:29.622544] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:42.210 [2024-12-06 04:16:29.622582] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:42.210 [2024-12-06 04:16:29.622650] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:42.210 [2024-12-06 04:16:29.622683] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:42.210 [2024-12-06 04:16:29.622761] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:42.210 [2024-12-06 04:16:29.622792] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:42.210 [2024-12-06 04:16:29.622850] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:42.210 [2024-12-06 04:16:29.622897] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:42.210 [2024-12-06 04:16:29.622915] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:42.210 [2024-12-06 04:16:29.622933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.210 [2024-12-06 04:16:29.622952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:42.210 [2024-12-06 04:16:29.622970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:25:42.210 [2024-12-06 04:16:29.622988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.210 [2024-12-06 04:16:29.623090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.210 [2024-12-06 04:16:29.623115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:42.210 [2024-12-06 04:16:29.623133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:42.210 [2024-12-06 04:16:29.623151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.210 [2024-12-06 04:16:29.623272] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:42.210 [2024-12-06 04:16:29.623297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:42.210 [2024-12-06 04:16:29.623317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:42.210 [2024-12-06 04:16:29.623335] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:25:42.210 [2024-12-06 04:16:29.623448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:42.210 [2024-12-06 04:16:29.623470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:42.210 [2024-12-06 04:16:29.623489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:42.210 [2024-12-06 04:16:29.623529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:42.210 [2024-12-06 04:16:29.623551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:42.210 [2024-12-06 04:16:29.623575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:42.210 [2024-12-06 04:16:29.623613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:42.210 [2024-12-06 04:16:29.623633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:42.210 [2024-12-06 04:16:29.623651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:42.210 [2024-12-06 04:16:29.623668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:42.210 [2024-12-06 04:16:29.623686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:42.210 [2024-12-06 04:16:29.623704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.210 [2024-12-06 04:16:29.623731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:42.210 [2024-12-06 04:16:29.623750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:42.210 [2024-12-06 04:16:29.623768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.210 [2024-12-06 04:16:29.623817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:42.210 [2024-12-06 04:16:29.623838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:42.210 [2024-12-06 04:16:29.623855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:42.210 [2024-12-06 04:16:29.623873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:42.210 [2024-12-06 04:16:29.623891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:42.210 [2024-12-06 04:16:29.623908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:42.210 [2024-12-06 04:16:29.623916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:42.210 [2024-12-06 04:16:29.623923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:42.210 [2024-12-06 04:16:29.623930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:42.210 [2024-12-06 04:16:29.623937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:42.210 [2024-12-06 04:16:29.623944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:42.210 [2024-12-06 04:16:29.623950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:42.210 [2024-12-06 04:16:29.623957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:42.210 [2024-12-06 04:16:29.623963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:42.210 [2024-12-06 04:16:29.623970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:42.210 [2024-12-06 04:16:29.623976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:42.210 [2024-12-06 04:16:29.623982] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:42.210 [2024-12-06 04:16:29.623989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:42.210 [2024-12-06 04:16:29.623996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:42.210 [2024-12-06 04:16:29.624002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:42.210 [2024-12-06 04:16:29.624008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.210 [2024-12-06 04:16:29.624015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:42.210 [2024-12-06 04:16:29.624022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:42.210 [2024-12-06 04:16:29.624033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.210 [2024-12-06 04:16:29.624040] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:42.210 [2024-12-06 04:16:29.624047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:42.210 [2024-12-06 04:16:29.624057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:42.210 [2024-12-06 04:16:29.624064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.210 [2024-12-06 04:16:29.624071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:42.211 [2024-12-06 04:16:29.624078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:42.211 [2024-12-06 04:16:29.624085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:42.211 [2024-12-06 04:16:29.624092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:42.211 [2024-12-06 04:16:29.624098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:42.211 [2024-12-06 04:16:29.624104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:42.211 [2024-12-06 04:16:29.624113] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:42.211 [2024-12-06 04:16:29.624122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:42.211 [2024-12-06 04:16:29.624130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:42.211 [2024-12-06 04:16:29.624137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:42.211 [2024-12-06 04:16:29.624144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:42.211 [2024-12-06 04:16:29.624151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:42.211 [2024-12-06 04:16:29.624160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:42.211 [2024-12-06 04:16:29.624167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:42.211 [2024-12-06 04:16:29.624174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:42.211 [2024-12-06 
04:16:29.624180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:42.211 [2024-12-06 04:16:29.624187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:42.211 [2024-12-06 04:16:29.624194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:42.211 [2024-12-06 04:16:29.624201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:42.211 [2024-12-06 04:16:29.624208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:42.211 [2024-12-06 04:16:29.624215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:42.211 [2024-12-06 04:16:29.624222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:42.211 [2024-12-06 04:16:29.624229] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:42.211 [2024-12-06 04:16:29.624237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:42.211 [2024-12-06 04:16:29.624245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:42.211 [2024-12-06 04:16:29.624252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:42.211 [2024-12-06 04:16:29.624259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:42.211 [2024-12-06 04:16:29.624268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:42.211 [2024-12-06 04:16:29.624275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.211 [2024-12-06 04:16:29.624283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:42.211 [2024-12-06 04:16:29.624290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:25:42.211 [2024-12-06 04:16:29.624296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.211 [2024-12-06 04:16:29.649520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.211 [2024-12-06 04:16:29.649557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:42.211 [2024-12-06 04:16:29.649568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.174 ms 00:25:42.211 [2024-12-06 04:16:29.649576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.211 [2024-12-06 04:16:29.649669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.211 [2024-12-06 04:16:29.649677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:42.211 [2024-12-06 04:16:29.649685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:25:42.211 [2024-12-06 04:16:29.649692] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.211 [2024-12-06 04:16:29.698449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.211 [2024-12-06 04:16:29.698519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:42.211 [2024-12-06 04:16:29.698536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.668 ms 00:25:42.211 [2024-12-06 04:16:29.698544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.211 [2024-12-06 04:16:29.698601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.211 [2024-12-06 04:16:29.698611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:42.211 [2024-12-06 04:16:29.698620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:42.211 [2024-12-06 04:16:29.698627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.211 [2024-12-06 04:16:29.699032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.211 [2024-12-06 04:16:29.699049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:42.211 [2024-12-06 04:16:29.699059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:25:42.211 [2024-12-06 04:16:29.699071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.211 [2024-12-06 04:16:29.699197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.211 [2024-12-06 04:16:29.699206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:42.211 [2024-12-06 04:16:29.699214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:25:42.211 [2024-12-06 04:16:29.699221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.211 [2024-12-06 04:16:29.711954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.211 [2024-12-06 04:16:29.712122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:42.211 [2024-12-06 04:16:29.712139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.714 ms 00:25:42.211 [2024-12-06 04:16:29.712147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.211 [2024-12-06 04:16:29.724483] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:42.211 [2024-12-06 04:16:29.724520] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:42.211 [2024-12-06 04:16:29.724533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.211 [2024-12-06 04:16:29.724541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:42.211 [2024-12-06 04:16:29.724550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.281 ms 00:25:42.212 [2024-12-06 04:16:29.724557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.469 [2024-12-06 04:16:29.748796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.469 [2024-12-06 04:16:29.748842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:42.469 [2024-12-06 04:16:29.748853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.195 ms 00:25:42.469 [2024-12-06 04:16:29.748861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.469 [2024-12-06 
04:16:29.760435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.469 [2024-12-06 04:16:29.760483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:42.469 [2024-12-06 04:16:29.760493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.522 ms 00:25:42.469 [2024-12-06 04:16:29.760500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.469 [2024-12-06 04:16:29.771507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.469 [2024-12-06 04:16:29.771669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:42.469 [2024-12-06 04:16:29.771685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.969 ms 00:25:42.469 [2024-12-06 04:16:29.771693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.469 [2024-12-06 04:16:29.772316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.469 [2024-12-06 04:16:29.772330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:42.469 [2024-12-06 04:16:29.772339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:25:42.469 [2024-12-06 04:16:29.772347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.469 [2024-12-06 04:16:29.826347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.469 [2024-12-06 04:16:29.826698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:42.469 [2024-12-06 04:16:29.826732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.983 ms 00:25:42.469 [2024-12-06 04:16:29.826742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.469 [2024-12-06 04:16:29.837180] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:42.469 [2024-12-06 04:16:29.839780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.469 [2024-12-06 04:16:29.839802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:42.469 [2024-12-06 04:16:29.839814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.993 ms 00:25:42.469 [2024-12-06 04:16:29.839826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.469 [2024-12-06 04:16:29.839920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.469 [2024-12-06 04:16:29.839931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:42.469 [2024-12-06 04:16:29.839940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:42.469 [2024-12-06 04:16:29.839947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.469 [2024-12-06 04:16:29.840010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.469 [2024-12-06 04:16:29.840019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:42.469 [2024-12-06 04:16:29.840028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:42.469 [2024-12-06 04:16:29.840035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.469 [2024-12-06 04:16:29.840055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.469 [2024-12-06 04:16:29.840062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:42.469 [2024-12-06 04:16:29.840070] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:42.469 [2024-12-06 04:16:29.840077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.469 [2024-12-06 04:16:29.840105] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:42.469 [2024-12-06 04:16:29.840114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.469 [2024-12-06 04:16:29.840122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:42.469 [2024-12-06 04:16:29.840130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:42.469 [2024-12-06 04:16:29.840139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.469 [2024-12-06 04:16:29.863353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.469 [2024-12-06 04:16:29.863517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:42.469 [2024-12-06 04:16:29.863570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.195 ms 00:25:42.469 [2024-12-06 04:16:29.863593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.469 [2024-12-06 04:16:29.863758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.469 [2024-12-06 04:16:29.863806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:42.469 [2024-12-06 04:16:29.863865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:25:42.469 [2024-12-06 04:16:29.863888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.469 [2024-12-06 04:16:29.864921] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 267.027 ms, result 0 00:25:43.402  [2024-12-06T04:16:32.305Z] Copying: 44/1024 [MB] (44 MBps) [2024-12-06T04:16:32.898Z] Copying: 89/1024 [MB] (45 MBps) [2024-12-06T04:16:34.283Z] Copying: 136/1024 [MB] (47 MBps) [2024-12-06T04:16:35.218Z] Copying: 181/1024 [MB] (44 MBps) [2024-12-06T04:16:36.152Z] Copying: 226/1024 [MB] (44 MBps) [2024-12-06T04:16:37.085Z] Copying: 271/1024 [MB] (45 MBps) [2024-12-06T04:16:38.018Z] Copying: 317/1024 [MB] (45 MBps) [2024-12-06T04:16:38.953Z] Copying: 360/1024 [MB] (43 MBps) [2024-12-06T04:16:39.886Z] Copying: 403/1024 [MB] (43 MBps) [2024-12-06T04:16:41.260Z] Copying: 448/1024 [MB] (44 MBps) [2024-12-06T04:16:42.195Z] Copying: 493/1024 [MB] (44 MBps) [2024-12-06T04:16:43.130Z] Copying: 538/1024 [MB] (45 MBps) [2024-12-06T04:16:44.063Z] Copying: 584/1024 [MB] (45 MBps) [2024-12-06T04:16:45.026Z] Copying: 631/1024 [MB] (47 MBps) [2024-12-06T04:16:45.957Z] Copying: 677/1024 [MB] (45 MBps) [2024-12-06T04:16:46.889Z] Copying: 722/1024 [MB] (44 MBps) [2024-12-06T04:16:48.265Z] Copying: 769/1024 [MB] (46 MBps) [2024-12-06T04:16:49.200Z] Copying: 816/1024 [MB] (47 MBps) [2024-12-06T04:16:50.136Z] Copying: 861/1024 [MB] (45 MBps) [2024-12-06T04:16:51.072Z] Copying: 907/1024 [MB] (45 MBps) [2024-12-06T04:16:52.008Z] Copying: 953/1024 [MB] (45 MBps) [2024-12-06T04:16:52.943Z] Copying: 998/1024 [MB] (45 MBps) [2024-12-06T04:16:53.511Z] Copying: 1023/1024 [MB] (25 MBps) [2024-12-06T04:16:53.511Z] Copying: 1024/1024 [MB] (average 43 MBps)[2024-12-06 04:16:53.272106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.984 [2024-12-06 04:16:53.272171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:05.984 [2024-12-06 
04:16:53.272188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:05.984 [2024-12-06 04:16:53.272196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.984 [2024-12-06 04:16:53.274207] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:05.984 [2024-12-06 04:16:53.280071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.984 [2024-12-06 04:16:53.280229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:05.984 [2024-12-06 04:16:53.280248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.823 ms 00:26:05.984 [2024-12-06 04:16:53.280263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.984 [2024-12-06 04:16:53.291243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.984 [2024-12-06 04:16:53.291376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:05.984 [2024-12-06 04:16:53.291393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.058 ms 00:26:05.984 [2024-12-06 04:16:53.291402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.984 [2024-12-06 04:16:53.309423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.984 [2024-12-06 04:16:53.309460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:05.984 [2024-12-06 04:16:53.309471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.002 ms 00:26:05.984 [2024-12-06 04:16:53.309479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.984 [2024-12-06 04:16:53.315635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.984 [2024-12-06 04:16:53.315663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:05.984 [2024-12-06 04:16:53.315674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.122 ms 00:26:05.984 [2024-12-06 04:16:53.315682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.984 [2024-12-06 04:16:53.338859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.984 [2024-12-06 04:16:53.338897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:05.984 [2024-12-06 04:16:53.338908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.103 ms 00:26:05.984 [2024-12-06 04:16:53.338915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.984 [2024-12-06 04:16:53.352953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.984 [2024-12-06 04:16:53.353096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:05.984 [2024-12-06 04:16:53.353113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.004 ms 00:26:05.984 [2024-12-06 04:16:53.353121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.984 [2024-12-06 04:16:53.415917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.984 [2024-12-06 04:16:53.416109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:05.984 [2024-12-06 04:16:53.416136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.758 ms 00:26:05.984 [2024-12-06 04:16:53.416144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.984 [2024-12-06 04:16:53.439948] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.984 [2024-12-06 04:16:53.439987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:05.984 [2024-12-06 04:16:53.439999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.787 ms 00:26:05.984 [2024-12-06 04:16:53.440015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.984 [2024-12-06 04:16:53.462358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.984 [2024-12-06 04:16:53.462397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:05.984 [2024-12-06 04:16:53.462408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.309 ms 00:26:05.984 [2024-12-06 04:16:53.462416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.984 [2024-12-06 04:16:53.484973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.984 [2024-12-06 04:16:53.485014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:05.984 [2024-12-06 04:16:53.485025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.523 ms 00:26:05.984 [2024-12-06 04:16:53.485032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.244 [2024-12-06 04:16:53.507413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.244 [2024-12-06 04:16:53.507451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:06.244 [2024-12-06 04:16:53.507462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.325 ms 00:26:06.244 [2024-12-06 04:16:53.507469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.244 [2024-12-06 04:16:53.507501] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:06.244 [2024-12-06 04:16:53.507515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 122112 / 261120 wr_cnt: 1 state: open 00:26:06.244 [2024-12-06 04:16:53.507525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:06.244 [2024-12-06 04:16:53.507533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:06.244 [2024-12-06 04:16:53.507541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:06.244 [2024-12-06 04:16:53.507548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:06.244 [2024-12-06 04:16:53.507556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:06.244 [2024-12-06 04:16:53.507563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:06.244 [2024-12-06 04:16:53.507570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:06.244 [2024-12-06 04:16:53.507577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:06.244 [2024-12-06 04:16:53.507585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:06.244 [2024-12-06 04:16:53.507592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:06.244 [2024-12-06 04:16:53.507599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:06.244 [2024-12-06 04:16:53.507607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:06.244 [2024-12-06 04:16:53.507614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:06.244 [2024-12-06 04:16:53.507622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:06.244 [2024-12-06 04:16:53.507629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:06.244 [2024-12-06 04:16:53.507636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507825] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.507993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.508000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 
04:16:53.508007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.508014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.508021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.508029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.508036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.508043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.508050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.508057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.508065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.508072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.508079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.508086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.508093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.508101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.508108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.508115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.508123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.508130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.508137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.508144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.508151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:06.245 [2024-12-06 04:16:53.508158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:06.246 [2024-12-06 04:16:53.508165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:06.246 [2024-12-06 04:16:53.508173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:06.246 [2024-12-06 04:16:53.508181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:26:06.246 [2024-12-06 04:16:53.508188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:06.246 [2024-12-06 04:16:53.508195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:06.246 [2024-12-06 04:16:53.508203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:06.246 [2024-12-06 04:16:53.508210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:06.246 [2024-12-06 04:16:53.508217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:06.246 [2024-12-06 04:16:53.508224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:06.246 [2024-12-06 04:16:53.508231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:06.246 [2024-12-06 04:16:53.508239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:06.246 [2024-12-06 04:16:53.508246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:06.246 [2024-12-06 04:16:53.508253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:06.246 [2024-12-06 04:16:53.508260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:06.246 [2024-12-06 04:16:53.508267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:06.246 [2024-12-06 04:16:53.508275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:06.246 [2024-12-06 04:16:53.508282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:06.246 [2024-12-06 04:16:53.508297] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:06.246 [2024-12-06 04:16:53.508305] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9cb7737c-45c5-4972-932b-0d23e0036544 00:26:06.246 [2024-12-06 04:16:53.508323] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 122112 00:26:06.246 [2024-12-06 04:16:53.508330] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 123072 00:26:06.246 [2024-12-06 04:16:53.508337] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 122112 00:26:06.246 [2024-12-06 04:16:53.508345] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0079 00:26:06.246 [2024-12-06 04:16:53.508352] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:06.246 [2024-12-06 04:16:53.508360] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:06.246 [2024-12-06 04:16:53.508367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:06.246 [2024-12-06 04:16:53.508373] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:06.246 [2024-12-06 04:16:53.508380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:06.246 [2024-12-06 04:16:53.508388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.246 [2024-12-06 04:16:53.508395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:06.246 [2024-12-06 04:16:53.508402] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.888 ms 00:26:06.246 [2024-12-06 04:16:53.508410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.246 [2024-12-06 04:16:53.520801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.246 [2024-12-06 04:16:53.520834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:06.246 [2024-12-06 04:16:53.520846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.375 ms 00:26:06.246 [2024-12-06 04:16:53.520854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.246 [2024-12-06 04:16:53.521206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.246 [2024-12-06 04:16:53.521219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:06.246 [2024-12-06 04:16:53.521233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:26:06.246 [2024-12-06 04:16:53.521240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.246 [2024-12-06 04:16:53.553630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.246 [2024-12-06 04:16:53.553677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:06.246 [2024-12-06 04:16:53.553687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.246 [2024-12-06 04:16:53.553694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.246 [2024-12-06 04:16:53.553770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.246 [2024-12-06 04:16:53.553780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:06.246 [2024-12-06 04:16:53.553792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.246 [2024-12-06 04:16:53.553799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.246 [2024-12-06 04:16:53.553876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.246 [2024-12-06 04:16:53.553887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:06.246 [2024-12-06 04:16:53.553895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.246 [2024-12-06 04:16:53.553902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.246 [2024-12-06 04:16:53.553916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.246 [2024-12-06 04:16:53.553924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:06.246 [2024-12-06 04:16:53.553932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.246 [2024-12-06 04:16:53.553939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.246 [2024-12-06 04:16:53.631554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.246 [2024-12-06 04:16:53.631609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:06.246 [2024-12-06 04:16:53.631620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.246 [2024-12-06 04:16:53.631628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.246 [2024-12-06 04:16:53.693790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.246 [2024-12-06 04:16:53.694015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:26:06.246 [2024-12-06 04:16:53.694032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.246 [2024-12-06 04:16:53.694045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.246 [2024-12-06 04:16:53.694100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.246 [2024-12-06 04:16:53.694109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:06.246 [2024-12-06 04:16:53.694117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.246 [2024-12-06 04:16:53.694124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.246 [2024-12-06 04:16:53.694172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.246 [2024-12-06 04:16:53.694181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:06.246 [2024-12-06 04:16:53.694189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.246 [2024-12-06 04:16:53.694196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.246 [2024-12-06 04:16:53.694290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.246 [2024-12-06 04:16:53.694299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:06.246 [2024-12-06 04:16:53.694308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.246 [2024-12-06 04:16:53.694315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.246 [2024-12-06 04:16:53.694343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.246 [2024-12-06 04:16:53.694352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:06.246 [2024-12-06 04:16:53.694359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.246 [2024-12-06 04:16:53.694366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.246 [2024-12-06 04:16:53.694401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.246 [2024-12-06 04:16:53.694410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:06.246 [2024-12-06 04:16:53.694417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.246 [2024-12-06 04:16:53.694424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.246 [2024-12-06 04:16:53.694474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.247 [2024-12-06 04:16:53.694484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:06.247 [2024-12-06 04:16:53.694491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.247 [2024-12-06 04:16:53.694499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.247 [2024-12-06 04:16:53.694608] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 423.379 ms, result 0 00:26:08.148 00:26:08.148 00:26:08.148 04:16:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:10.078 04:16:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:10.078 [2024-12-06 04:16:57.519374] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:26:10.078 [2024-12-06 04:16:57.519473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79698 ] 00:26:10.336 [2024-12-06 04:16:57.673836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.336 [2024-12-06 04:16:57.773392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.594 [2024-12-06 04:16:58.031004] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:10.594 [2024-12-06 04:16:58.031072] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:10.853 [2024-12-06 04:16:58.184673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.853 [2024-12-06 04:16:58.184883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:10.853 [2024-12-06 04:16:58.184903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:10.853 [2024-12-06 04:16:58.184912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.853 [2024-12-06 04:16:58.184971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.853 [2024-12-06 04:16:58.184984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:10.853 [2024-12-06 04:16:58.184992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:26:10.853 [2024-12-06 04:16:58.184999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.853 [2024-12-06 04:16:58.185019] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:10.853 [2024-12-06 04:16:58.185749] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:10.853 [2024-12-06 04:16:58.185765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.853 [2024-12-06 04:16:58.185774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:10.853 [2024-12-06 04:16:58.185782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:26:10.853 [2024-12-06 04:16:58.185790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.853 [2024-12-06 04:16:58.186856] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:10.853 [2024-12-06 04:16:58.199038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.853 [2024-12-06 04:16:58.199073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:10.853 [2024-12-06 04:16:58.199085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.183 ms 00:26:10.853 [2024-12-06 04:16:58.199093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.853 [2024-12-06 04:16:58.199155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.853 [2024-12-06 04:16:58.199165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:10.853 [2024-12-06 04:16:58.199173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:26:10.853 [2024-12-06 
04:16:58.199180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.853 [2024-12-06 04:16:58.204024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.853 [2024-12-06 04:16:58.204172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:10.853 [2024-12-06 04:16:58.204187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.775 ms 00:26:10.853 [2024-12-06 04:16:58.204199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.853 [2024-12-06 04:16:58.204268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.854 [2024-12-06 04:16:58.204276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:10.854 [2024-12-06 04:16:58.204285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:10.854 [2024-12-06 04:16:58.204292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.854 [2024-12-06 04:16:58.204335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.854 [2024-12-06 04:16:58.204345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:10.854 [2024-12-06 04:16:58.204353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:10.854 [2024-12-06 04:16:58.204360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.854 [2024-12-06 04:16:58.204385] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:10.854 [2024-12-06 04:16:58.207756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.854 [2024-12-06 04:16:58.207784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:10.854 [2024-12-06 04:16:58.207796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.378 ms 00:26:10.854 [2024-12-06 04:16:58.207804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.854 [2024-12-06 04:16:58.207834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.854 [2024-12-06 04:16:58.207843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:10.854 [2024-12-06 04:16:58.207851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:10.854 [2024-12-06 04:16:58.207858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.854 [2024-12-06 04:16:58.207878] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:10.854 [2024-12-06 04:16:58.207897] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:10.854 [2024-12-06 04:16:58.207930] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:10.854 [2024-12-06 04:16:58.207947] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:10.854 [2024-12-06 04:16:58.208049] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:10.854 [2024-12-06 04:16:58.208059] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:10.854 [2024-12-06 04:16:58.208070] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:10.854 
[2024-12-06 04:16:58.208079] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:10.854 [2024-12-06 04:16:58.208088] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:10.854 [2024-12-06 04:16:58.208096] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:10.854 [2024-12-06 04:16:58.208104] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:10.854 [2024-12-06 04:16:58.208113] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:10.854 [2024-12-06 04:16:58.208120] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:10.854 [2024-12-06 04:16:58.208128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.854 [2024-12-06 04:16:58.208137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:10.854 [2024-12-06 04:16:58.208144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:26:10.854 [2024-12-06 04:16:58.208151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.854 [2024-12-06 04:16:58.208233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.854 [2024-12-06 04:16:58.208241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:10.854 [2024-12-06 04:16:58.208249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:10.854 [2024-12-06 04:16:58.208255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.854 [2024-12-06 04:16:58.208372] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:10.854 [2024-12-06 04:16:58.208383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:10.854 [2024-12-06 04:16:58.208391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:10.854 [2024-12-06 04:16:58.208398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.854 [2024-12-06 04:16:58.208405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:10.854 [2024-12-06 04:16:58.208413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:10.854 [2024-12-06 04:16:58.208420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:10.854 [2024-12-06 04:16:58.208428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:10.854 [2024-12-06 04:16:58.208434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:10.854 [2024-12-06 04:16:58.208441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:10.854 [2024-12-06 04:16:58.208448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:10.854 [2024-12-06 04:16:58.208455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:10.854 [2024-12-06 04:16:58.208462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:10.854 [2024-12-06 04:16:58.208473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:10.854 [2024-12-06 04:16:58.208480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:10.854 [2024-12-06 04:16:58.208486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.854 [2024-12-06 04:16:58.208493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
00:26:10.854 [2024-12-06 04:16:58.208500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:10.854 [2024-12-06 04:16:58.208506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.854 [2024-12-06 04:16:58.208513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:10.854 [2024-12-06 04:16:58.208520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:10.854 [2024-12-06 04:16:58.208527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.854 [2024-12-06 04:16:58.208534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:10.854 [2024-12-06 04:16:58.208540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:10.854 [2024-12-06 04:16:58.208547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.854 [2024-12-06 04:16:58.208553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:10.854 [2024-12-06 04:16:58.208559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:10.854 [2024-12-06 04:16:58.208566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.854 [2024-12-06 04:16:58.208572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:10.854 [2024-12-06 04:16:58.208579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:10.854 [2024-12-06 04:16:58.208585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.854 [2024-12-06 04:16:58.208592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:10.854 [2024-12-06 04:16:58.208598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:10.854 [2024-12-06 04:16:58.208605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:10.854 [2024-12-06 04:16:58.208611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:10.854 [2024-12-06 04:16:58.208617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:10.854 [2024-12-06 04:16:58.208624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:10.854 [2024-12-06 04:16:58.208630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:10.854 [2024-12-06 04:16:58.208636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:10.854 [2024-12-06 04:16:58.208643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.854 [2024-12-06 04:16:58.208649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:10.854 [2024-12-06 04:16:58.208655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:10.854 [2024-12-06 04:16:58.208662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.854 [2024-12-06 04:16:58.208669] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:10.855 [2024-12-06 04:16:58.208677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:10.855 [2024-12-06 04:16:58.208684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:10.855 [2024-12-06 04:16:58.208690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.855 [2024-12-06 04:16:58.208698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:10.855 [2024-12-06 04:16:58.208704] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:10.855 [2024-12-06 04:16:58.208711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:10.855 [2024-12-06 04:16:58.208737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:10.855 [2024-12-06 04:16:58.208744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:10.855 [2024-12-06 04:16:58.208752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:10.855 [2024-12-06 04:16:58.208761] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:10.855 [2024-12-06 04:16:58.208770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:10.855 [2024-12-06 04:16:58.208781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:10.855 [2024-12-06 04:16:58.208789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:10.855 [2024-12-06 04:16:58.208796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:10.855 [2024-12-06 04:16:58.208804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:10.855 [2024-12-06 04:16:58.208811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:10.855 [2024-12-06 04:16:58.208818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:10.855 [2024-12-06 04:16:58.208826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:10.855 [2024-12-06 04:16:58.208833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:10.855 [2024-12-06 04:16:58.208840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:10.855 [2024-12-06 04:16:58.208847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:10.855 [2024-12-06 04:16:58.208854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:10.855 [2024-12-06 04:16:58.208861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:10.855 [2024-12-06 04:16:58.208868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:10.855 [2024-12-06 04:16:58.208876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:10.855 [2024-12-06 04:16:58.208883] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:10.855 [2024-12-06 04:16:58.208891] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:26:10.855 [2024-12-06 04:16:58.208898] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:10.855 [2024-12-06 04:16:58.208906] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:10.855 [2024-12-06 04:16:58.208913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:10.855 [2024-12-06 04:16:58.208920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:10.855 [2024-12-06 04:16:58.208927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.855 [2024-12-06 04:16:58.208934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:10.855 [2024-12-06 04:16:58.208941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:26:10.855 [2024-12-06 04:16:58.208948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.855 [2024-12-06 04:16:58.234480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.855 [2024-12-06 04:16:58.234521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:10.855 [2024-12-06 04:16:58.234532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.487 ms 00:26:10.855 [2024-12-06 04:16:58.234543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.855 [2024-12-06 04:16:58.234629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.855 [2024-12-06 04:16:58.234638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:10.855 [2024-12-06 04:16:58.234645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:26:10.855 [2024-12-06 04:16:58.234652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.855 [2024-12-06 04:16:58.277051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.855 [2024-12-06 04:16:58.277095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:10.855 [2024-12-06 04:16:58.277108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.339 ms 00:26:10.855 [2024-12-06 04:16:58.277117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.855 [2024-12-06 04:16:58.277168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.855 [2024-12-06 04:16:58.277178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:10.855 [2024-12-06 04:16:58.277189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:10.855 [2024-12-06 04:16:58.277197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.855 [2024-12-06 04:16:58.277554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.855 [2024-12-06 04:16:58.277570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:10.855 [2024-12-06 04:16:58.277579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:26:10.855 [2024-12-06 04:16:58.277587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.855 [2024-12-06 04:16:58.277737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:10.855 [2024-12-06 04:16:58.277748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:10.855 [2024-12-06 04:16:58.277762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:26:10.855 [2024-12-06 04:16:58.277770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.855 [2024-12-06 04:16:58.290781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.855 [2024-12-06 04:16:58.290921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:10.855 [2024-12-06 04:16:58.290937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.992 ms 00:26:10.855 [2024-12-06 04:16:58.290946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.855 [2024-12-06 04:16:58.303158] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:10.855 [2024-12-06 04:16:58.303192] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:10.855 [2024-12-06 04:16:58.303204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.855 [2024-12-06 04:16:58.303213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:10.855 [2024-12-06 04:16:58.303221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.159 ms 00:26:10.855 [2024-12-06 04:16:58.303228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.855 [2024-12-06 04:16:58.327454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.855 [2024-12-06 04:16:58.327492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:10.855 [2024-12-06 04:16:58.327503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.186 ms 00:26:10.855 [2024-12-06 04:16:58.327511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.855 [2024-12-06 04:16:58.339222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.856 [2024-12-06 04:16:58.339254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:10.856 [2024-12-06 04:16:58.339263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.688 ms 00:26:10.856 [2024-12-06 04:16:58.339270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.856 [2024-12-06 04:16:58.350085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.856 [2024-12-06 04:16:58.350207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:10.856 [2024-12-06 04:16:58.350222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.783 ms 00:26:10.856 [2024-12-06 04:16:58.350229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.856 [2024-12-06 04:16:58.350884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.856 [2024-12-06 04:16:58.350906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:10.856 [2024-12-06 04:16:58.350918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:26:10.856 [2024-12-06 04:16:58.350926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.114 [2024-12-06 04:16:58.405213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.114 [2024-12-06 04:16:58.405420] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:11.114 [2024-12-06 04:16:58.405444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.269 ms 00:26:11.114 [2024-12-06 04:16:58.405453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.114 [2024-12-06 04:16:58.415632] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:11.114 [2024-12-06 04:16:58.417990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.114 [2024-12-06 04:16:58.418020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:11.114 [2024-12-06 04:16:58.418033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.495 ms 00:26:11.114 [2024-12-06 04:16:58.418041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.114 [2024-12-06 04:16:58.418129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.114 [2024-12-06 04:16:58.418140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:11.114 [2024-12-06 04:16:58.418151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:11.114 [2024-12-06 04:16:58.418159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.114 [2024-12-06 04:16:58.419537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.114 [2024-12-06 04:16:58.419658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:11.114 [2024-12-06 04:16:58.419674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.339 ms 00:26:11.114 [2024-12-06 04:16:58.419683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.114 [2024-12-06 04:16:58.419710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.114 [2024-12-06 04:16:58.419736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:11.114 [2024-12-06 04:16:58.419745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:11.114 [2024-12-06 04:16:58.419755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.114 [2024-12-06 04:16:58.419792] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:11.114 [2024-12-06 04:16:58.419803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.114 [2024-12-06 04:16:58.419811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:11.114 [2024-12-06 04:16:58.419819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:11.114 [2024-12-06 04:16:58.419826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.114 [2024-12-06 04:16:58.443439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.114 [2024-12-06 04:16:58.443473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:11.114 [2024-12-06 04:16:58.443488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.595 ms 00:26:11.114 [2024-12-06 04:16:58.443496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.115 [2024-12-06 04:16:58.443565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.115 [2024-12-06 04:16:58.443575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:11.115 [2024-12-06 04:16:58.443583] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:11.115 [2024-12-06 04:16:58.443591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.115 [2024-12-06 04:16:58.444525] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 259.447 ms, result 0 00:26:12.489  [2024-12-06T04:17:00.948Z] Copying: 1028/1048576 [kB] (1028 kBps) [2024-12-06T04:17:01.879Z] Copying: 5996/1048576 [kB] (4968 kBps) [2024-12-06T04:17:02.816Z] Copying: 59/1024 [MB] (53 MBps) [2024-12-06T04:17:03.748Z] Copying: 114/1024 [MB] (54 MBps) [2024-12-06T04:17:04.682Z] Copying: 167/1024 [MB] (53 MBps) [2024-12-06T04:17:06.054Z] Copying: 220/1024 [MB] (53 MBps) [2024-12-06T04:17:06.985Z] Copying: 277/1024 [MB] (56 MBps) [2024-12-06T04:17:07.917Z] Copying: 328/1024 [MB] (50 MBps) [2024-12-06T04:17:08.930Z] Copying: 380/1024 [MB] (52 MBps) [2024-12-06T04:17:09.860Z] Copying: 433/1024 [MB] (52 MBps) [2024-12-06T04:17:10.794Z] Copying: 486/1024 [MB] (52 MBps) [2024-12-06T04:17:11.726Z] Copying: 536/1024 [MB] (49 MBps) [2024-12-06T04:17:12.659Z] Copying: 589/1024 [MB] (53 MBps) [2024-12-06T04:17:14.033Z] Copying: 641/1024 [MB] (51 MBps) [2024-12-06T04:17:14.968Z] Copying: 696/1024 [MB] (55 MBps) [2024-12-06T04:17:15.909Z] Copying: 751/1024 [MB] (54 MBps) [2024-12-06T04:17:16.845Z] Copying: 806/1024 [MB] (55 MBps) [2024-12-06T04:17:17.781Z] Copying: 860/1024 [MB] (54 MBps) [2024-12-06T04:17:18.716Z] Copying: 912/1024 [MB] (52 MBps) [2024-12-06T04:17:19.650Z] Copying: 966/1024 [MB] (53 MBps) [2024-12-06T04:17:19.907Z] Copying: 1020/1024 [MB] (54 MBps) [2024-12-06T04:17:21.281Z] Copying: 1024/1024 [MB] (average 48 MBps)[2024-12-06 04:17:21.029159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.754 [2024-12-06 04:17:21.029468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:33.754 [2024-12-06 04:17:21.029565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:33.754 [2024-12-06 04:17:21.029686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.754 [2024-12-06 04:17:21.029783] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:33.754 [2024-12-06 04:17:21.033825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.754 [2024-12-06 04:17:21.033979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:33.754 [2024-12-06 04:17:21.034062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.905 ms 00:26:33.754 [2024-12-06 04:17:21.034098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.754 [2024-12-06 04:17:21.034619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.754 [2024-12-06 04:17:21.034766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:33.754 [2024-12-06 04:17:21.034851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:26:33.754 [2024-12-06 04:17:21.034887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.754 [2024-12-06 04:17:21.047037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.754 [2024-12-06 04:17:21.047148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:33.754 [2024-12-06 04:17:21.047201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.070 ms 00:26:33.754 [2024-12-06 
04:17:21.047224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.754 [2024-12-06 04:17:21.053412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.755 [2024-12-06 04:17:21.053515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:33.755 [2024-12-06 04:17:21.053573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.140 ms 00:26:33.755 [2024-12-06 04:17:21.053597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.755 [2024-12-06 04:17:21.077440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.755 [2024-12-06 04:17:21.077563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:33.755 [2024-12-06 04:17:21.077614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.749 ms 00:26:33.755 [2024-12-06 04:17:21.077636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.755 [2024-12-06 04:17:21.092130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.755 [2024-12-06 04:17:21.092248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:33.755 [2024-12-06 04:17:21.092298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.383 ms 00:26:33.755 [2024-12-06 04:17:21.092320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.755 [2024-12-06 04:17:21.094208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.755 [2024-12-06 04:17:21.094299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:33.755 [2024-12-06 04:17:21.094346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.853 ms 00:26:33.755 [2024-12-06 04:17:21.094375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.755 [2024-12-06 04:17:21.117134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.755 [2024-12-06 04:17:21.117260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:33.755 [2024-12-06 04:17:21.117308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.732 ms 00:26:33.755 [2024-12-06 04:17:21.117329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.755 [2024-12-06 04:17:21.139616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.755 [2024-12-06 04:17:21.139746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:33.755 [2024-12-06 04:17:21.139795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.248 ms 00:26:33.755 [2024-12-06 04:17:21.139816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.755 [2024-12-06 04:17:21.162233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.755 [2024-12-06 04:17:21.162346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:33.755 [2024-12-06 04:17:21.162393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.379 ms 00:26:33.755 [2024-12-06 04:17:21.162413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.755 [2024-12-06 04:17:21.184547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.755 [2024-12-06 04:17:21.184661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:33.755 [2024-12-06 04:17:21.184706] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.066 ms 00:26:33.755 [2024-12-06 04:17:21.184743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.755 [2024-12-06 04:17:21.184798] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:33.755 [2024-12-06 04:17:21.184828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:33.755 [2024-12-06 04:17:21.184892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:33.755 [2024-12-06 04:17:21.184953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.184985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 
wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.185959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:33.755 [2024-12-06 04:17:21.186240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186319] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186510] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:33.756 [2024-12-06 04:17:21.186541] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:33.756 [2024-12-06 04:17:21.186548] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9cb7737c-45c5-4972-932b-0d23e0036544 00:26:33.756 [2024-12-06 04:17:21.186556] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:33.756 [2024-12-06 04:17:21.186563] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 142528 00:26:33.756 [2024-12-06 04:17:21.186574] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 140544 00:26:33.756 [2024-12-06 04:17:21.186582] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0141 00:26:33.756 [2024-12-06 04:17:21.186589] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:33.756 [2024-12-06 04:17:21.186602] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:33.756 [2024-12-06 04:17:21.186609] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:33.756 [2024-12-06 04:17:21.186615] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:33.756 [2024-12-06 04:17:21.186622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:33.756 [2024-12-06 04:17:21.186629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.756 [2024-12-06 04:17:21.186637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:33.756 [2024-12-06 04:17:21.186644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.832 ms 00:26:33.756 [2024-12-06 04:17:21.186651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.756 [2024-12-06 04:17:21.199074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.756 [2024-12-06 04:17:21.199108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:33.756 [2024-12-06 04:17:21.199119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.403 ms 00:26:33.756 [2024-12-06 04:17:21.199126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.756 [2024-12-06 04:17:21.199479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.756 [2024-12-06 04:17:21.199492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:33.756 [2024-12-06 04:17:21.199500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:26:33.756 [2024-12-06 04:17:21.199507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.756 [2024-12-06 04:17:21.231901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.756 [2024-12-06 04:17:21.232018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:33.756 [2024-12-06 04:17:21.232041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.756 [2024-12-06 04:17:21.232049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.756 [2024-12-06 04:17:21.232105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:26:33.756 [2024-12-06 04:17:21.232114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:33.756 [2024-12-06 04:17:21.232121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.756 [2024-12-06 04:17:21.232128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.756 [2024-12-06 04:17:21.232201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.756 [2024-12-06 04:17:21.232211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:33.756 [2024-12-06 04:17:21.232219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.756 [2024-12-06 04:17:21.232226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.756 [2024-12-06 04:17:21.232240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.756 [2024-12-06 04:17:21.232248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:33.756 [2024-12-06 04:17:21.232255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.756 [2024-12-06 04:17:21.232262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.063 [2024-12-06 04:17:21.308981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.063 [2024-12-06 04:17:21.309027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:34.063 [2024-12-06 04:17:21.309039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.063 [2024-12-06 04:17:21.309046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.063 [2024-12-06 04:17:21.371784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.063 [2024-12-06 04:17:21.371829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:34.063 [2024-12-06 04:17:21.371839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.063 [2024-12-06 04:17:21.371847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.063 [2024-12-06 04:17:21.371895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.063 [2024-12-06 04:17:21.371909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:34.064 [2024-12-06 04:17:21.371917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.064 [2024-12-06 04:17:21.371924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.064 [2024-12-06 04:17:21.371970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.064 [2024-12-06 04:17:21.371979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:34.064 [2024-12-06 04:17:21.371986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.064 [2024-12-06 04:17:21.371994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.064 [2024-12-06 04:17:21.372077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.064 [2024-12-06 04:17:21.372086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:34.064 [2024-12-06 04:17:21.372097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.064 [2024-12-06 04:17:21.372104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.064 
[2024-12-06 04:17:21.372130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.064 [2024-12-06 04:17:21.372139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:34.064 [2024-12-06 04:17:21.372146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.064 [2024-12-06 04:17:21.372154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.064 [2024-12-06 04:17:21.372184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.064 [2024-12-06 04:17:21.372192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:34.064 [2024-12-06 04:17:21.372202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.064 [2024-12-06 04:17:21.372209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.064 [2024-12-06 04:17:21.372245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.064 [2024-12-06 04:17:21.372255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:34.064 [2024-12-06 04:17:21.372263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.064 [2024-12-06 04:17:21.372270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.064 [2024-12-06 04:17:21.372375] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.213 ms, result 0 00:26:35.438 00:26:35.438 00:26:35.438 04:17:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:37.338 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:37.338 04:17:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:37.597 [2024-12-06 04:17:24.925204] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
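The spdk_dd invocation above reads the second half of the test range back out of ftl0 (--count=262144 blocks after --skip=262144 blocks) into testfile2, and the md5sum -c steps then compare the re-read data against a digest recorded before the dirty shutdown. A minimal sketch of that hash-a-block-range-and-compare pattern, in Python rather than the test's shell: the 4096-byte block size, the path, and the expected digest are placeholders (the log states none of them), so this illustrates the verification idea, not SPDK's actual test code.

    import hashlib

    BLOCK_SIZE = 4096      # assumption; the unit spdk_dd uses depends on the bdev
    COUNT = 262144         # blocks to copy, from the spdk_dd invocation above
    SKIP = 262144          # blocks to skip on the input side

    def md5_of_range(path, skip_blocks, count_blocks, block_size=BLOCK_SIZE):
        # Hash count_blocks blocks starting skip_blocks blocks into the file.
        h = hashlib.md5()
        with open(path, "rb") as f:
            f.seek(skip_blocks * block_size)
            remaining = count_blocks * block_size
            while remaining > 0:
                chunk = f.read(min(1 << 20, remaining))
                if not chunk:
                    break      # short file; the digest will simply not match
                h.update(chunk)
                remaining -= len(chunk)
        return h.hexdigest()

    # Digest recorded before the dirty shutdown (placeholder value here);
    # a match after recovery is what the md5sum -c checks in this log assert.
    expected = "0123456789abcdef0123456789abcdef"
    actual = md5_of_range("/tmp/ftl_readback", SKIP, COUNT)  # hypothetical path
    print("OK" if actual == expected else "FAILED")

A matching digest after the restart is the whole point of the dirty-shutdown test: the data written before the unclean stop must survive the recovery path traced below.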
00:26:37.597 [2024-12-06 04:17:24.925505] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79979 ] 00:26:37.597 [2024-12-06 04:17:25.088887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.855 [2024-12-06 04:17:25.183994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.113 [2024-12-06 04:17:25.438160] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:38.113 [2024-12-06 04:17:25.438220] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:38.113 [2024-12-06 04:17:25.592083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.113 [2024-12-06 04:17:25.592133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:38.113 [2024-12-06 04:17:25.592146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:38.113 [2024-12-06 04:17:25.592154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.113 [2024-12-06 04:17:25.592195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.113 [2024-12-06 04:17:25.592207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:38.113 [2024-12-06 04:17:25.592215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:26:38.113 [2024-12-06 04:17:25.592222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.113 [2024-12-06 04:17:25.592239] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:38.113 [2024-12-06 04:17:25.592941] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:38.113 [2024-12-06 04:17:25.592956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.113 [2024-12-06 04:17:25.592963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:38.113 [2024-12-06 04:17:25.592971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:26:38.113 [2024-12-06 04:17:25.592979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.113 [2024-12-06 04:17:25.593977] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:38.113 [2024-12-06 04:17:25.606260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.113 [2024-12-06 04:17:25.606393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:38.113 [2024-12-06 04:17:25.606411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.284 ms 00:26:38.114 [2024-12-06 04:17:25.606420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.114 [2024-12-06 04:17:25.606488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.114 [2024-12-06 04:17:25.606498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:38.114 [2024-12-06 04:17:25.606507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:38.114 [2024-12-06 04:17:25.606514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.114 [2024-12-06 04:17:25.611145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:38.114 [2024-12-06 04:17:25.611179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:38.114 [2024-12-06 04:17:25.611194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.566 ms 00:26:38.114 [2024-12-06 04:17:25.611210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.114 [2024-12-06 04:17:25.611294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.114 [2024-12-06 04:17:25.611303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:38.114 [2024-12-06 04:17:25.611311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:26:38.114 [2024-12-06 04:17:25.611318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.114 [2024-12-06 04:17:25.611356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.114 [2024-12-06 04:17:25.611365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:38.114 [2024-12-06 04:17:25.611373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:38.114 [2024-12-06 04:17:25.611379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.114 [2024-12-06 04:17:25.611402] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:38.114 [2024-12-06 04:17:25.614564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.114 [2024-12-06 04:17:25.614590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:38.114 [2024-12-06 04:17:25.614601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.167 ms 00:26:38.114 [2024-12-06 04:17:25.614608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.114 [2024-12-06 04:17:25.614637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.114 [2024-12-06 04:17:25.614646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:38.114 [2024-12-06 04:17:25.614654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:38.114 [2024-12-06 04:17:25.614661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.114 [2024-12-06 04:17:25.614679] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:38.114 [2024-12-06 04:17:25.614697] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:38.114 [2024-12-06 04:17:25.614743] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:38.114 [2024-12-06 04:17:25.614761] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:38.114 [2024-12-06 04:17:25.614863] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:38.114 [2024-12-06 04:17:25.614873] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:38.114 [2024-12-06 04:17:25.614883] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:38.114 [2024-12-06 04:17:25.614893] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:38.114 [2024-12-06 04:17:25.614902] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:38.114 [2024-12-06 04:17:25.614910] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:38.114 [2024-12-06 04:17:25.614917] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:38.114 [2024-12-06 04:17:25.614926] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:38.114 [2024-12-06 04:17:25.614933] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:38.114 [2024-12-06 04:17:25.614940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.114 [2024-12-06 04:17:25.614948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:38.114 [2024-12-06 04:17:25.614956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:26:38.114 [2024-12-06 04:17:25.614963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.114 [2024-12-06 04:17:25.615044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.114 [2024-12-06 04:17:25.615053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:38.114 [2024-12-06 04:17:25.615060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:38.114 [2024-12-06 04:17:25.615067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.114 [2024-12-06 04:17:25.615169] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:38.114 [2024-12-06 04:17:25.615179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:38.114 [2024-12-06 04:17:25.615186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:38.114 [2024-12-06 04:17:25.615194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.114 [2024-12-06 04:17:25.615203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:38.114 [2024-12-06 04:17:25.615209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:38.114 [2024-12-06 04:17:25.615216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:38.114 [2024-12-06 04:17:25.615223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:38.114 [2024-12-06 04:17:25.615230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:38.114 [2024-12-06 04:17:25.615237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:38.114 [2024-12-06 04:17:25.615244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:38.114 [2024-12-06 04:17:25.615250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:38.114 [2024-12-06 04:17:25.615258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:38.114 [2024-12-06 04:17:25.615270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:38.114 [2024-12-06 04:17:25.615277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:38.114 [2024-12-06 04:17:25.615283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.114 [2024-12-06 04:17:25.615290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:38.114 [2024-12-06 04:17:25.615297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:38.114 [2024-12-06 04:17:25.615303] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.114 [2024-12-06 04:17:25.615310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:38.114 [2024-12-06 04:17:25.615316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:38.114 [2024-12-06 04:17:25.615322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:38.114 [2024-12-06 04:17:25.615329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:38.114 [2024-12-06 04:17:25.615335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:38.114 [2024-12-06 04:17:25.615341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:38.114 [2024-12-06 04:17:25.615348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:38.114 [2024-12-06 04:17:25.615354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:38.114 [2024-12-06 04:17:25.615360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:38.114 [2024-12-06 04:17:25.615366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:38.114 [2024-12-06 04:17:25.615373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:38.114 [2024-12-06 04:17:25.615379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:38.114 [2024-12-06 04:17:25.615385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:38.114 [2024-12-06 04:17:25.615391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:38.114 [2024-12-06 04:17:25.615397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:38.114 [2024-12-06 04:17:25.615403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:38.114 [2024-12-06 04:17:25.615410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:38.114 [2024-12-06 04:17:25.615416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:38.114 [2024-12-06 04:17:25.615422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:38.114 [2024-12-06 04:17:25.615429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:38.114 [2024-12-06 04:17:25.615435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.114 [2024-12-06 04:17:25.615441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:38.114 [2024-12-06 04:17:25.615448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:38.114 [2024-12-06 04:17:25.615454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.114 [2024-12-06 04:17:25.615460] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:38.114 [2024-12-06 04:17:25.615468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:38.114 [2024-12-06 04:17:25.615475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:38.114 [2024-12-06 04:17:25.615482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.114 [2024-12-06 04:17:25.615490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:38.114 [2024-12-06 04:17:25.615496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:38.114 [2024-12-06 04:17:25.615503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:38.114 
[2024-12-06 04:17:25.615509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:38.114 [2024-12-06 04:17:25.615516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:38.114 [2024-12-06 04:17:25.615522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:38.115 [2024-12-06 04:17:25.615530] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:38.115 [2024-12-06 04:17:25.615539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:38.115 [2024-12-06 04:17:25.615549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:38.115 [2024-12-06 04:17:25.615557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:38.115 [2024-12-06 04:17:25.615564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:38.115 [2024-12-06 04:17:25.615571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:38.115 [2024-12-06 04:17:25.615577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:38.115 [2024-12-06 04:17:25.615585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:38.115 [2024-12-06 04:17:25.615591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:38.115 [2024-12-06 04:17:25.615598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:38.115 [2024-12-06 04:17:25.615605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:38.115 [2024-12-06 04:17:25.615612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:38.115 [2024-12-06 04:17:25.615619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:38.115 [2024-12-06 04:17:25.615626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:38.115 [2024-12-06 04:17:25.615633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:38.115 [2024-12-06 04:17:25.615641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:38.115 [2024-12-06 04:17:25.615647] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:38.115 [2024-12-06 04:17:25.615655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:38.115 [2024-12-06 04:17:25.615663] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:38.115 [2024-12-06 04:17:25.615670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:38.115 [2024-12-06 04:17:25.615677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:38.115 [2024-12-06 04:17:25.615684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:38.115 [2024-12-06 04:17:25.615691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.115 [2024-12-06 04:17:25.615698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:38.115 [2024-12-06 04:17:25.615705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:26:38.115 [2024-12-06 04:17:25.615712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.371 [2024-12-06 04:17:25.640837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.371 [2024-12-06 04:17:25.640978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:38.371 [2024-12-06 04:17:25.640994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.060 ms 00:26:38.371 [2024-12-06 04:17:25.641006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.371 [2024-12-06 04:17:25.641089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.371 [2024-12-06 04:17:25.641097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:38.371 [2024-12-06 04:17:25.641105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:26:38.371 [2024-12-06 04:17:25.641112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.371 [2024-12-06 04:17:25.685841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.371 [2024-12-06 04:17:25.685882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:38.371 [2024-12-06 04:17:25.685894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.678 ms 00:26:38.371 [2024-12-06 04:17:25.685902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.371 [2024-12-06 04:17:25.685943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.371 [2024-12-06 04:17:25.685953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:38.371 [2024-12-06 04:17:25.685964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:38.371 [2024-12-06 04:17:25.685971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.371 [2024-12-06 04:17:25.686323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.371 [2024-12-06 04:17:25.686338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:38.371 [2024-12-06 04:17:25.686347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:26:38.371 [2024-12-06 04:17:25.686355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.371 [2024-12-06 04:17:25.686485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.371 [2024-12-06 04:17:25.686494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:38.371 [2024-12-06 04:17:25.686506] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:26:38.371 [2024-12-06 04:17:25.686514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.371 [2024-12-06 04:17:25.699179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.371 [2024-12-06 04:17:25.699212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:38.371 [2024-12-06 04:17:25.699222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.646 ms 00:26:38.371 [2024-12-06 04:17:25.699229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.371 [2024-12-06 04:17:25.711526] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:38.371 [2024-12-06 04:17:25.711558] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:38.371 [2024-12-06 04:17:25.711570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.371 [2024-12-06 04:17:25.711577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:38.371 [2024-12-06 04:17:25.711586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.254 ms 00:26:38.371 [2024-12-06 04:17:25.711593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.371 [2024-12-06 04:17:25.735760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.371 [2024-12-06 04:17:25.735805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:38.371 [2024-12-06 04:17:25.735816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.130 ms 00:26:38.371 [2024-12-06 04:17:25.735823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.371 [2024-12-06 04:17:25.747426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.371 [2024-12-06 04:17:25.747550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:38.371 [2024-12-06 04:17:25.747566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.560 ms 00:26:38.371 [2024-12-06 04:17:25.747573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.371 [2024-12-06 04:17:25.758623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.371 [2024-12-06 04:17:25.758742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:38.371 [2024-12-06 04:17:25.758756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.023 ms 00:26:38.371 [2024-12-06 04:17:25.758764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.372 [2024-12-06 04:17:25.759347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.372 [2024-12-06 04:17:25.759365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:38.372 [2024-12-06 04:17:25.759376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:26:38.372 [2024-12-06 04:17:25.759384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.372 [2024-12-06 04:17:25.812916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.372 [2024-12-06 04:17:25.812970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:38.372 [2024-12-06 04:17:25.812988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 53.514 ms 00:26:38.372 [2024-12-06 04:17:25.812996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.372 [2024-12-06 04:17:25.823195] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:38.372 [2024-12-06 04:17:25.825445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.372 [2024-12-06 04:17:25.825472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:38.372 [2024-12-06 04:17:25.825484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.401 ms 00:26:38.372 [2024-12-06 04:17:25.825491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.372 [2024-12-06 04:17:25.825575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.372 [2024-12-06 04:17:25.825585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:38.372 [2024-12-06 04:17:25.825597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:38.372 [2024-12-06 04:17:25.825604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.372 [2024-12-06 04:17:25.826143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.372 [2024-12-06 04:17:25.826203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:38.372 [2024-12-06 04:17:25.826215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:26:38.372 [2024-12-06 04:17:25.826222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.372 [2024-12-06 04:17:25.826246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.372 [2024-12-06 04:17:25.826254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:38.372 [2024-12-06 04:17:25.826261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:38.372 [2024-12-06 04:17:25.826268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.372 [2024-12-06 04:17:25.826302] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:38.372 [2024-12-06 04:17:25.826312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.372 [2024-12-06 04:17:25.826319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:38.372 [2024-12-06 04:17:25.826327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:38.372 [2024-12-06 04:17:25.826334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.372 [2024-12-06 04:17:25.848928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.372 [2024-12-06 04:17:25.849047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:38.372 [2024-12-06 04:17:25.849068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.578 ms 00:26:38.372 [2024-12-06 04:17:25.849075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.372 [2024-12-06 04:17:25.849136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.372 [2024-12-06 04:17:25.849145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:38.372 [2024-12-06 04:17:25.849153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:38.372 [2024-12-06 04:17:25.849160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:38.372 [2024-12-06 04:17:25.850123] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 257.631 ms, result 0 00:26:39.742  [2024-12-06T04:17:28.200Z] Copying: 46/1024 [MB] (46 MBps) [2024-12-06T04:17:29.133Z] Copying: 88/1024 [MB] (42 MBps) [2024-12-06T04:17:30.068Z] Copying: 138/1024 [MB] (49 MBps) [2024-12-06T04:17:31.443Z] Copying: 188/1024 [MB] (50 MBps) [2024-12-06T04:17:32.379Z] Copying: 238/1024 [MB] (49 MBps) [2024-12-06T04:17:33.312Z] Copying: 285/1024 [MB] (47 MBps) [2024-12-06T04:17:34.301Z] Copying: 334/1024 [MB] (48 MBps) [2024-12-06T04:17:35.261Z] Copying: 384/1024 [MB] (50 MBps) [2024-12-06T04:17:36.193Z] Copying: 432/1024 [MB] (48 MBps) [2024-12-06T04:17:37.126Z] Copying: 480/1024 [MB] (48 MBps) [2024-12-06T04:17:38.059Z] Copying: 526/1024 [MB] (45 MBps) [2024-12-06T04:17:39.432Z] Copying: 575/1024 [MB] (49 MBps) [2024-12-06T04:17:40.381Z] Copying: 625/1024 [MB] (49 MBps) [2024-12-06T04:17:41.314Z] Copying: 673/1024 [MB] (47 MBps) [2024-12-06T04:17:42.249Z] Copying: 722/1024 [MB] (48 MBps) [2024-12-06T04:17:43.184Z] Copying: 767/1024 [MB] (45 MBps) [2024-12-06T04:17:44.119Z] Copying: 815/1024 [MB] (48 MBps) [2024-12-06T04:17:45.052Z] Copying: 861/1024 [MB] (45 MBps) [2024-12-06T04:17:46.424Z] Copying: 911/1024 [MB] (50 MBps) [2024-12-06T04:17:47.366Z] Copying: 958/1024 [MB] (46 MBps) [2024-12-06T04:17:47.634Z] Copying: 1006/1024 [MB] (48 MBps) [2024-12-06T04:17:47.634Z] Copying: 1024/1024 [MB] (average 47 MBps)[2024-12-06 04:17:47.580613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.107 [2024-12-06 04:17:47.580699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:00.107 [2024-12-06 04:17:47.580763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:00.107 [2024-12-06 04:17:47.580787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.107 [2024-12-06 04:17:47.580837] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:00.107 [2024-12-06 04:17:47.586569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.107 [2024-12-06 04:17:47.586635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:00.107 [2024-12-06 04:17:47.586661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.699 ms 00:27:00.107 [2024-12-06 04:17:47.586683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.107 [2024-12-06 04:17:47.587183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.107 [2024-12-06 04:17:47.587226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:00.107 [2024-12-06 04:17:47.587250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:27:00.107 [2024-12-06 04:17:47.587272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.107 [2024-12-06 04:17:47.592330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.107 [2024-12-06 04:17:47.592449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:00.107 [2024-12-06 04:17:47.592468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.020 ms 00:27:00.107 [2024-12-06 04:17:47.592486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.107 [2024-12-06 04:17:47.598812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
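The progress meter above reports 1024 MB copied at an average of 47 MBps, which is consistent with the surrounding timestamps: 'FTL startup' finishes at 04:17:25.850 and the first shutdown trace step ('Deinit core IO channel') lands at 04:17:47.580, about 21.7 s later. A quick back-of-envelope check (illustrative arithmetic only; the 47 MBps figure is the meter's own average, so small rounding differences are expected):

    start_s = 25.850   # 04:17:25.850, 'FTL startup' finished
    end_s = 47.580     # 04:17:47.580, first 'Deinit core IO channel' step
    copied_mb = 1024
    print(f"{copied_mb / (end_s - start_s):.1f} MBps")  # ~47.1, matching the meter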
00:27:00.107 [2024-12-06 04:17:47.598913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:00.107 [2024-12-06 04:17:47.598994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.297 ms 00:27:00.107 [2024-12-06 04:17:47.599033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.107 [2024-12-06 04:17:47.623263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.107 [2024-12-06 04:17:47.623455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:00.107 [2024-12-06 04:17:47.623533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.029 ms 00:27:00.107 [2024-12-06 04:17:47.623567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.367 [2024-12-06 04:17:47.637532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.367 [2024-12-06 04:17:47.637724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:00.367 [2024-12-06 04:17:47.637804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.908 ms 00:27:00.367 [2024-12-06 04:17:47.637840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.367 [2024-12-06 04:17:47.639927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.367 [2024-12-06 04:17:47.640062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:00.367 [2024-12-06 04:17:47.640145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.023 ms 00:27:00.367 [2024-12-06 04:17:47.640251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.367 [2024-12-06 04:17:47.663192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.367 [2024-12-06 04:17:47.663329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:00.367 [2024-12-06 04:17:47.663402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.887 ms 00:27:00.367 [2024-12-06 04:17:47.663437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.367 [2024-12-06 04:17:47.685815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.367 [2024-12-06 04:17:47.685924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:00.367 [2024-12-06 04:17:47.686036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.331 ms 00:27:00.367 [2024-12-06 04:17:47.686074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.367 [2024-12-06 04:17:47.707940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.367 [2024-12-06 04:17:47.708059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:00.367 [2024-12-06 04:17:47.708133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.769 ms 00:27:00.367 [2024-12-06 04:17:47.708169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.367 [2024-12-06 04:17:47.730612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.367 [2024-12-06 04:17:47.730730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:00.367 [2024-12-06 04:17:47.730808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.279 ms 00:27:00.367 [2024-12-06 04:17:47.730844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.367 [2024-12-06 
04:17:47.730891] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:00.367 [2024-12-06 04:17:47.731048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:00.367 [2024-12-06 04:17:47.731112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:00.367 [2024-12-06 04:17:47.731211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.731266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.731317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.731419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.731471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.731616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.731712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.731822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.731878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.731966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.732020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.732141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.732231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.732325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.732380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.732467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.732520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.732639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.732739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.732799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.732851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.732902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 
04:17:47.732919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.732932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.732945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.732958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.732972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.732985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.732998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.733011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.733024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.733037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.733050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.733064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.733077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.733089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.733102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.733115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.733128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.733141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.733154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.733166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.733179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.733192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:00.367 [2024-12-06 04:17:47.733204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:27:00.368 [2024-12-06 04:17:47.733243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:00.368 [2024-12-06 04:17:47.733939] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:00.368 [2024-12-06 04:17:47.733953] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9cb7737c-45c5-4972-932b-0d23e0036544 00:27:00.368 [2024-12-06 04:17:47.733966] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:00.368 [2024-12-06 04:17:47.733978] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:00.368 [2024-12-06 04:17:47.733990] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:00.368 [2024-12-06 04:17:47.734002] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:00.368 [2024-12-06 04:17:47.734022] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:00.368 [2024-12-06 04:17:47.734035] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:00.368 [2024-12-06 04:17:47.734048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:00.368 [2024-12-06 04:17:47.734058] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:00.368 [2024-12-06 04:17:47.734069] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:00.368 [2024-12-06 04:17:47.734082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.368 [2024-12-06 04:17:47.734095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:00.368 [2024-12-06 04:17:47.734109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.192 ms 00:27:00.368 [2024-12-06 04:17:47.734124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.368 [2024-12-06 04:17:47.747464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.368 [2024-12-06 04:17:47.747498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:00.368 [2024-12-06 04:17:47.747513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.312 ms 00:27:00.368 [2024-12-06 04:17:47.747524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.368 [2024-12-06 04:17:47.747993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.368 [2024-12-06 04:17:47.748026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:00.368 [2024-12-06 04:17:47.748040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:27:00.368 [2024-12-06 04:17:47.748050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.368 [2024-12-06 04:17:47.780374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.368 [2024-12-06 04:17:47.780421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:00.368 [2024-12-06 04:17:47.780437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.368 [2024-12-06 04:17:47.780449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.368 [2024-12-06 04:17:47.780531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.368 [2024-12-06 04:17:47.780549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:00.368 [2024-12-06 04:17:47.780561] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.368 [2024-12-06 04:17:47.780571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.368 [2024-12-06 04:17:47.780662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.368 [2024-12-06 04:17:47.780682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:00.368 [2024-12-06 04:17:47.780696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.368 [2024-12-06 04:17:47.780707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.368 [2024-12-06 04:17:47.780740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.368 [2024-12-06 04:17:47.780754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:00.368 [2024-12-06 04:17:47.780770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.368 [2024-12-06 04:17:47.780782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.368 [2024-12-06 04:17:47.856057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.368 [2024-12-06 04:17:47.856111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:00.368 [2024-12-06 04:17:47.856127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.368 [2024-12-06 04:17:47.856138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.626 [2024-12-06 04:17:47.917514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.626 [2024-12-06 04:17:47.917568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:00.626 [2024-12-06 04:17:47.917583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.626 [2024-12-06 04:17:47.917594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.626 [2024-12-06 04:17:47.917686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.626 [2024-12-06 04:17:47.917700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:00.626 [2024-12-06 04:17:47.917713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.626 [2024-12-06 04:17:47.917746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.626 [2024-12-06 04:17:47.917789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.626 [2024-12-06 04:17:47.917803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:00.626 [2024-12-06 04:17:47.917816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.626 [2024-12-06 04:17:47.917831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.626 [2024-12-06 04:17:47.917955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.626 [2024-12-06 04:17:47.917969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:00.626 [2024-12-06 04:17:47.917982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.626 [2024-12-06 04:17:47.917994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.626 [2024-12-06 04:17:47.918037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.626 [2024-12-06 04:17:47.918051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:27:00.626 [2024-12-06 04:17:47.918064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.626 [2024-12-06 04:17:47.918075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.626 [2024-12-06 04:17:47.918124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.626 [2024-12-06 04:17:47.918138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:00.627 [2024-12-06 04:17:47.918150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.627 [2024-12-06 04:17:47.918162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.627 [2024-12-06 04:17:47.918216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.627 [2024-12-06 04:17:47.918230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:00.627 [2024-12-06 04:17:47.918243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.627 [2024-12-06 04:17:47.918258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.627 [2024-12-06 04:17:47.918405] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 337.762 ms, result 0 00:27:01.191 00:27:01.191 00:27:01.191 04:17:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:03.087 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:03.087 04:17:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:03.087 04:17:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:03.087 04:17:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:03.087 04:17:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:03.344 04:17:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:03.344 04:17:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:03.344 04:17:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:03.344 Process with pid 78788 is not found 00:27:03.344 04:17:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 78788 00:27:03.344 04:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 78788 ']' 00:27:03.344 04:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 78788 00:27:03.344 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78788) - No such process 00:27:03.344 04:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 78788 is not found' 00:27:03.344 04:17:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:03.601 Remove shared memory files 00:27:03.601 04:17:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:03.601 04:17:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:03.601 04:17:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:03.601 04:17:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:03.601 04:17:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 
-- # rm -f rm -f 00:27:03.601 04:17:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:03.601 04:17:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:03.601 ************************************ 00:27:03.601 END TEST ftl_dirty_shutdown 00:27:03.601 ************************************ 00:27:03.601 00:27:03.601 real 2m15.924s 00:27:03.601 user 2m32.505s 00:27:03.601 sys 0m21.876s 00:27:03.601 04:17:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:03.601 04:17:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:03.601 04:17:51 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:03.601 04:17:51 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:03.601 04:17:51 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:03.601 04:17:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:03.601 ************************************ 00:27:03.601 START TEST ftl_upgrade_shutdown 00:27:03.601 ************************************ 00:27:03.601 04:17:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:03.601 * Looking for test storage... 00:27:03.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:03.601 04:17:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:03.601 04:17:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:27:03.601 04:17:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:03.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.860 --rc genhtml_branch_coverage=1 00:27:03.860 --rc genhtml_function_coverage=1 00:27:03.860 --rc genhtml_legend=1 00:27:03.860 --rc geninfo_all_blocks=1 00:27:03.860 --rc geninfo_unexecuted_blocks=1 00:27:03.860 00:27:03.860 ' 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:03.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.860 --rc genhtml_branch_coverage=1 00:27:03.860 --rc genhtml_function_coverage=1 00:27:03.860 --rc genhtml_legend=1 00:27:03.860 --rc geninfo_all_blocks=1 00:27:03.860 --rc geninfo_unexecuted_blocks=1 00:27:03.860 00:27:03.860 ' 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:03.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.860 --rc genhtml_branch_coverage=1 00:27:03.860 --rc genhtml_function_coverage=1 00:27:03.860 --rc genhtml_legend=1 00:27:03.860 --rc geninfo_all_blocks=1 00:27:03.860 --rc geninfo_unexecuted_blocks=1 00:27:03.860 00:27:03.860 ' 00:27:03.860 04:17:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:03.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.861 --rc genhtml_branch_coverage=1 00:27:03.861 --rc genhtml_function_coverage=1 00:27:03.861 --rc genhtml_legend=1 00:27:03.861 --rc geninfo_all_blocks=1 00:27:03.861 --rc geninfo_unexecuted_blocks=1 00:27:03.861 00:27:03.861 ' 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:03.861 04:17:51 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80321 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80321 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80321 ']' 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:03.861 04:17:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:03.861 [2024-12-06 04:17:51.270314] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
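[Editor's note] The xtrace above is the stock tcp_target_setup flow: export the FTL test parameters, launch spdk_tgt pinned to core 0, then block in waitforlisten until the RPC socket answers. A minimal sketch of that launch-and-wait pattern, using the binary and rpc.py paths printed in the log; the polling loop is an illustration of the idea, not the harness's literal code:

    # Launch the SPDK target on core 0 and wait until its RPC socket answers.
    # Binary and script paths match those printed in the xtrace above.
    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$SPDK_TGT" --cpumask='[0]' &
    tgt_pid=$!

    # Poll the default socket (/var/tmp/spdk.sock); bail out if the target dies.
    until "$RPC" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$tgt_pid" 2>/dev/null || exit 1
        sleep 0.5
    done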
00:27:03.861 [2024-12-06 04:17:51.270574] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80321 ] 00:27:04.118 [2024-12-06 04:17:51.432266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.118 [2024-12-06 04:17:51.527636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.682 04:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:04.682 04:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:04.682 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:04.682 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:04.682 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:04.683 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:04.683 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:04.683 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:04.683 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:04.683 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:04.683 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:04.683 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:04.683 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:04.683 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:04.683 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:04.683 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:04.683 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:04.683 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:04.683 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:04.683 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:04.683 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:04.683 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:04.683 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:04.940 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:04.940 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:04.940 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:04.940 04:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:27:04.940 04:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:04.940 04:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:04.940 04:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:27:04.940 04:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:05.198 04:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:05.198 { 00:27:05.198 "name": "basen1", 00:27:05.198 "aliases": [ 00:27:05.198 "db1cd424-9de4-40a1-9e3e-d4e18809ad91" 00:27:05.198 ], 00:27:05.198 "product_name": "NVMe disk", 00:27:05.198 "block_size": 4096, 00:27:05.198 "num_blocks": 1310720, 00:27:05.198 "uuid": "db1cd424-9de4-40a1-9e3e-d4e18809ad91", 00:27:05.198 "numa_id": -1, 00:27:05.198 "assigned_rate_limits": { 00:27:05.198 "rw_ios_per_sec": 0, 00:27:05.198 "rw_mbytes_per_sec": 0, 00:27:05.198 "r_mbytes_per_sec": 0, 00:27:05.198 "w_mbytes_per_sec": 0 00:27:05.198 }, 00:27:05.198 "claimed": true, 00:27:05.198 "claim_type": "read_many_write_one", 00:27:05.198 "zoned": false, 00:27:05.198 "supported_io_types": { 00:27:05.198 "read": true, 00:27:05.198 "write": true, 00:27:05.198 "unmap": true, 00:27:05.198 "flush": true, 00:27:05.198 "reset": true, 00:27:05.198 "nvme_admin": true, 00:27:05.198 "nvme_io": true, 00:27:05.198 "nvme_io_md": false, 00:27:05.198 "write_zeroes": true, 00:27:05.198 "zcopy": false, 00:27:05.198 "get_zone_info": false, 00:27:05.198 "zone_management": false, 00:27:05.198 "zone_append": false, 00:27:05.198 "compare": true, 00:27:05.198 "compare_and_write": false, 00:27:05.198 "abort": true, 00:27:05.198 "seek_hole": false, 00:27:05.198 "seek_data": false, 00:27:05.198 "copy": true, 00:27:05.198 "nvme_iov_md": false 00:27:05.198 }, 00:27:05.198 "driver_specific": { 00:27:05.198 "nvme": [ 00:27:05.198 { 00:27:05.198 "pci_address": "0000:00:11.0", 00:27:05.198 "trid": { 00:27:05.198 "trtype": "PCIe", 00:27:05.198 "traddr": "0000:00:11.0" 00:27:05.198 }, 00:27:05.198 "ctrlr_data": { 00:27:05.198 "cntlid": 0, 00:27:05.198 "vendor_id": "0x1b36", 00:27:05.198 "model_number": "QEMU NVMe Ctrl", 00:27:05.198 "serial_number": "12341", 00:27:05.198 "firmware_revision": "8.0.0", 00:27:05.198 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:05.198 "oacs": { 00:27:05.198 "security": 0, 00:27:05.198 "format": 1, 00:27:05.198 "firmware": 0, 00:27:05.198 "ns_manage": 1 00:27:05.198 }, 00:27:05.198 "multi_ctrlr": false, 00:27:05.198 "ana_reporting": false 00:27:05.198 }, 00:27:05.198 "vs": { 00:27:05.198 "nvme_version": "1.4" 00:27:05.198 }, 00:27:05.198 "ns_data": { 00:27:05.198 "id": 1, 00:27:05.198 "can_share": false 00:27:05.198 } 00:27:05.198 } 00:27:05.198 ], 00:27:05.198 "mp_policy": "active_passive" 00:27:05.198 } 00:27:05.198 } 00:27:05.198 ]' 00:27:05.198 04:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:05.198 04:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:05.198 04:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:05.198 04:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:05.198 04:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:05.198 04:17:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:05.198 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:05.198 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:05.198 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:05.198 04:17:52 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:05.198 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:05.455 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=a8e031a2-bb06-44a6-bfda-e5cb4cb2dff9 00:27:05.455 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:05.455 04:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a8e031a2-bb06-44a6-bfda-e5cb4cb2dff9 00:27:05.713 04:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:05.713 04:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=24686a7f-610c-4fb0-9011-ca6e7f3659c7 00:27:05.713 04:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 24686a7f-610c-4fb0-9011-ca6e7f3659c7 00:27:05.970 04:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=75fbf5b6-985c-4936-84fb-86b1d2a1968a 00:27:05.970 04:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 75fbf5b6-985c-4936-84fb-86b1d2a1968a ]] 00:27:05.970 04:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 75fbf5b6-985c-4936-84fb-86b1d2a1968a 5120 00:27:05.970 04:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:05.970 04:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:05.970 04:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=75fbf5b6-985c-4936-84fb-86b1d2a1968a 00:27:05.970 04:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:05.970 04:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 75fbf5b6-985c-4936-84fb-86b1d2a1968a 00:27:05.970 04:17:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=75fbf5b6-985c-4936-84fb-86b1d2a1968a 00:27:05.970 04:17:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:05.970 04:17:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:05.970 04:17:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:05.970 04:17:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 75fbf5b6-985c-4936-84fb-86b1d2a1968a 00:27:06.228 04:17:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:06.228 { 00:27:06.228 "name": "75fbf5b6-985c-4936-84fb-86b1d2a1968a", 00:27:06.228 "aliases": [ 00:27:06.228 "lvs/basen1p0" 00:27:06.228 ], 00:27:06.228 "product_name": "Logical Volume", 00:27:06.228 "block_size": 4096, 00:27:06.228 "num_blocks": 5242880, 00:27:06.228 "uuid": "75fbf5b6-985c-4936-84fb-86b1d2a1968a", 00:27:06.228 "assigned_rate_limits": { 00:27:06.228 "rw_ios_per_sec": 0, 00:27:06.228 "rw_mbytes_per_sec": 0, 00:27:06.228 "r_mbytes_per_sec": 0, 00:27:06.228 "w_mbytes_per_sec": 0 00:27:06.228 }, 00:27:06.228 "claimed": false, 00:27:06.228 "zoned": false, 00:27:06.228 "supported_io_types": { 00:27:06.228 "read": true, 00:27:06.228 "write": true, 00:27:06.228 "unmap": true, 00:27:06.228 "flush": false, 00:27:06.228 "reset": true, 00:27:06.228 "nvme_admin": false, 00:27:06.228 "nvme_io": false, 00:27:06.228 "nvme_io_md": false, 00:27:06.228 "write_zeroes": 
true, 00:27:06.228 "zcopy": false, 00:27:06.228 "get_zone_info": false, 00:27:06.228 "zone_management": false, 00:27:06.228 "zone_append": false, 00:27:06.228 "compare": false, 00:27:06.228 "compare_and_write": false, 00:27:06.228 "abort": false, 00:27:06.228 "seek_hole": true, 00:27:06.228 "seek_data": true, 00:27:06.228 "copy": false, 00:27:06.228 "nvme_iov_md": false 00:27:06.228 }, 00:27:06.228 "driver_specific": { 00:27:06.228 "lvol": { 00:27:06.228 "lvol_store_uuid": "24686a7f-610c-4fb0-9011-ca6e7f3659c7", 00:27:06.228 "base_bdev": "basen1", 00:27:06.228 "thin_provision": true, 00:27:06.228 "num_allocated_clusters": 0, 00:27:06.228 "snapshot": false, 00:27:06.228 "clone": false, 00:27:06.228 "esnap_clone": false 00:27:06.228 } 00:27:06.228 } 00:27:06.228 } 00:27:06.228 ]' 00:27:06.228 04:17:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:06.228 04:17:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:06.228 04:17:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:06.228 04:17:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:27:06.228 04:17:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:27:06.228 04:17:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:27:06.228 04:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:06.228 04:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:06.228 04:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:06.486 04:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:06.486 04:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:06.486 04:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:06.745 04:17:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:06.745 04:17:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:06.745 04:17:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 75fbf5b6-985c-4936-84fb-86b1d2a1968a -c cachen1p0 --l2p_dram_limit 2 00:27:06.745 [2024-12-06 04:17:54.235734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.745 [2024-12-06 04:17:54.235779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:06.745 [2024-12-06 04:17:54.235792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:06.745 [2024-12-06 04:17:54.235799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.745 [2024-12-06 04:17:54.235848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.745 [2024-12-06 04:17:54.235856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:06.745 [2024-12-06 04:17:54.235864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:27:06.745 [2024-12-06 04:17:54.235870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.745 [2024-12-06 04:17:54.235886] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:06.745 [2024-12-06 
04:17:54.236436] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:06.745 [2024-12-06 04:17:54.236456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.745 [2024-12-06 04:17:54.236462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:06.745 [2024-12-06 04:17:54.236471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.572 ms 00:27:06.745 [2024-12-06 04:17:54.236476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.745 [2024-12-06 04:17:54.236558] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 3cd2e71b-32c0-46ae-b13c-98922401b3c9 00:27:06.745 [2024-12-06 04:17:54.237519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.745 [2024-12-06 04:17:54.237548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:06.745 [2024-12-06 04:17:54.237556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:06.745 [2024-12-06 04:17:54.237563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.745 [2024-12-06 04:17:54.242253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.745 [2024-12-06 04:17:54.242383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:06.745 [2024-12-06 04:17:54.242395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.658 ms 00:27:06.745 [2024-12-06 04:17:54.242403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.745 [2024-12-06 04:17:54.242438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.745 [2024-12-06 04:17:54.242446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:06.745 [2024-12-06 04:17:54.242453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:06.745 [2024-12-06 04:17:54.242469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.745 [2024-12-06 04:17:54.242509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.745 [2024-12-06 04:17:54.242518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:06.745 [2024-12-06 04:17:54.242526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:06.745 [2024-12-06 04:17:54.242534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.745 [2024-12-06 04:17:54.242551] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:06.745 [2024-12-06 04:17:54.245400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.745 [2024-12-06 04:17:54.245420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:06.745 [2024-12-06 04:17:54.245429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.851 ms 00:27:06.745 [2024-12-06 04:17:54.245435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.745 [2024-12-06 04:17:54.245458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.745 [2024-12-06 04:17:54.245464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:06.745 [2024-12-06 04:17:54.245471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:06.745 [2024-12-06 04:17:54.245477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:06.745 [2024-12-06 04:17:54.245497] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:06.745 [2024-12-06 04:17:54.245605] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:06.745 [2024-12-06 04:17:54.245617] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:06.745 [2024-12-06 04:17:54.245625] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:06.745 [2024-12-06 04:17:54.245633] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:06.745 [2024-12-06 04:17:54.245641] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:06.745 [2024-12-06 04:17:54.245648] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:06.745 [2024-12-06 04:17:54.245654] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:06.745 [2024-12-06 04:17:54.245663] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:06.745 [2024-12-06 04:17:54.245669] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:06.745 [2024-12-06 04:17:54.245676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.745 [2024-12-06 04:17:54.245681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:06.745 [2024-12-06 04:17:54.245688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.180 ms 00:27:06.745 [2024-12-06 04:17:54.245693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.745 [2024-12-06 04:17:54.245857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.745 [2024-12-06 04:17:54.245886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:06.745 [2024-12-06 04:17:54.245903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.148 ms 00:27:06.745 [2024-12-06 04:17:54.245918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.745 [2024-12-06 04:17:54.246111] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:06.745 [2024-12-06 04:17:54.246129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:06.745 [2024-12-06 04:17:54.246146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:06.745 [2024-12-06 04:17:54.246184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:06.745 [2024-12-06 04:17:54.246271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:06.745 [2024-12-06 04:17:54.246289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:06.745 [2024-12-06 04:17:54.246323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:06.745 [2024-12-06 04:17:54.246341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:06.745 [2024-12-06 04:17:54.246359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:06.745 [2024-12-06 04:17:54.246373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:06.745 [2024-12-06 04:17:54.246388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:06.745 [2024-12-06 04:17:54.246419] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:06.745 [2024-12-06 04:17:54.246437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:06.745 [2024-12-06 04:17:54.246451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:06.746 [2024-12-06 04:17:54.246476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:06.746 [2024-12-06 04:17:54.246491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:06.746 [2024-12-06 04:17:54.246524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:06.746 [2024-12-06 04:17:54.246541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:06.746 [2024-12-06 04:17:54.246556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:06.746 [2024-12-06 04:17:54.246571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:06.746 [2024-12-06 04:17:54.246586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:06.746 [2024-12-06 04:17:54.246627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:06.746 [2024-12-06 04:17:54.246645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:06.746 [2024-12-06 04:17:54.246660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:06.746 [2024-12-06 04:17:54.246675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:06.746 [2024-12-06 04:17:54.246689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:06.746 [2024-12-06 04:17:54.246704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:06.746 [2024-12-06 04:17:54.246748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:06.746 [2024-12-06 04:17:54.246767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:06.746 [2024-12-06 04:17:54.246848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:06.746 [2024-12-06 04:17:54.246866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:06.746 [2024-12-06 04:17:54.246880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:06.746 [2024-12-06 04:17:54.246897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:06.746 [2024-12-06 04:17:54.246911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:06.746 [2024-12-06 04:17:54.246927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:06.746 [2024-12-06 04:17:54.246941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:06.746 [2024-12-06 04:17:54.246957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:06.746 [2024-12-06 04:17:54.246988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:06.746 [2024-12-06 04:17:54.247007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:06.746 [2024-12-06 04:17:54.247021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:06.746 [2024-12-06 04:17:54.247036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:06.746 [2024-12-06 04:17:54.247051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:06.746 [2024-12-06 04:17:54.247066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:06.746 [2024-12-06 04:17:54.247080] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:06.746 [2024-12-06 04:17:54.247096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:06.746 [2024-12-06 04:17:54.247111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:06.746 [2024-12-06 04:17:54.247143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:06.746 [2024-12-06 04:17:54.247161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:06.746 [2024-12-06 04:17:54.247178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:06.746 [2024-12-06 04:17:54.247192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:06.746 [2024-12-06 04:17:54.247207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:06.746 [2024-12-06 04:17:54.247222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:06.746 [2024-12-06 04:17:54.247237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:06.746 [2024-12-06 04:17:54.247253] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:06.746 [2024-12-06 04:17:54.247281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:06.746 [2024-12-06 04:17:54.247335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:06.746 [2024-12-06 04:17:54.247360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:06.746 [2024-12-06 04:17:54.247382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:06.746 [2024-12-06 04:17:54.247405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:06.746 [2024-12-06 04:17:54.247427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:06.746 [2024-12-06 04:17:54.247477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:06.746 [2024-12-06 04:17:54.247500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:06.746 [2024-12-06 04:17:54.247523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:06.746 [2024-12-06 04:17:54.247545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:06.746 [2024-12-06 04:17:54.247569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:06.746 [2024-12-06 04:17:54.247607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:06.746 [2024-12-06 04:17:54.247696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:06.746 [2024-12-06 04:17:54.247753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:06.746 [2024-12-06 04:17:54.247804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:06.746 [2024-12-06 04:17:54.247847] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:06.746 [2024-12-06 04:17:54.247874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:06.746 [2024-12-06 04:17:54.247897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:06.746 [2024-12-06 04:17:54.247955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:06.746 [2024-12-06 04:17:54.247977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:06.746 [2024-12-06 04:17:54.248024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:06.746 [2024-12-06 04:17:54.248048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.746 [2024-12-06 04:17:54.248064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:06.746 [2024-12-06 04:17:54.248102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.991 ms 00:27:06.746 [2024-12-06 04:17:54.248121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.746 [2024-12-06 04:17:54.248177] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
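[Editor's note] The layout dump above is the output of the bdev_ftl_create call that assembled the FTL instance from a thin-provisioned lvol on the first NVMe namespace (base device) plus a 5 GiB split of the second (NV cache). For reference, the RPC sequence; each call appears verbatim in the xtrace above, with the per-run UUIDs shown here as placeholders:

    # Sketch of the bdev stack built for this test (sizes are in MiB).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $RPC bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
    $RPC bdev_lvol_create_lvstore basen1 lvs
    $RPC bdev_lvol_create basen1p0 20480 -t -u <lvs-uuid>    # thin-provisioned, 20 GiB
    $RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create cachen1 -s 5120 1                 # one 5 GiB cache slice
    $RPC -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2

The scrub notice that follows is expected on a freshly created instance: the NV cache data region is wiped chunk by chunk before FTL startup completes.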
00:27:06.746 [2024-12-06 04:17:54.248236] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:09.277 [2024-12-06 04:17:56.783834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.277 [2024-12-06 04:17:56.784055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:09.277 [2024-12-06 04:17:56.784118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2535.647 ms 00:27:09.277 [2024-12-06 04:17:56.784145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.536 [2024-12-06 04:17:56.809025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.536 [2024-12-06 04:17:56.809182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:09.536 [2024-12-06 04:17:56.809235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.662 ms 00:27:09.536 [2024-12-06 04:17:56.809262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.536 [2024-12-06 04:17:56.809351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.536 [2024-12-06 04:17:56.809379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:09.536 [2024-12-06 04:17:56.809399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:09.536 [2024-12-06 04:17:56.809424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.536 [2024-12-06 04:17:56.840046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.536 [2024-12-06 04:17:56.840173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:09.536 [2024-12-06 04:17:56.840227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.576 ms 00:27:09.536 [2024-12-06 04:17:56.840252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.536 [2024-12-06 04:17:56.840292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.536 [2024-12-06 04:17:56.840392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:09.536 [2024-12-06 04:17:56.840412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:09.536 [2024-12-06 04:17:56.840457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.536 [2024-12-06 04:17:56.840833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.536 [2024-12-06 04:17:56.840924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:09.536 [2024-12-06 04:17:56.840976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.295 ms 00:27:09.536 [2024-12-06 04:17:56.841002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.536 [2024-12-06 04:17:56.841088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.536 [2024-12-06 04:17:56.841114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:09.536 [2024-12-06 04:17:56.841161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:09.536 [2024-12-06 04:17:56.841187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.536 [2024-12-06 04:17:56.854984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.536 [2024-12-06 04:17:56.855091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:09.536 [2024-12-06 04:17:56.855145] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.716 ms 00:27:09.536 [2024-12-06 04:17:56.855170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.536 [2024-12-06 04:17:56.882235] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:09.536 [2024-12-06 04:17:56.883181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.536 [2024-12-06 04:17:56.883281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:09.536 [2024-12-06 04:17:56.883338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.924 ms 00:27:09.536 [2024-12-06 04:17:56.883362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.536 [2024-12-06 04:17:56.904940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.536 [2024-12-06 04:17:56.905073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:09.536 [2024-12-06 04:17:56.905130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.531 ms 00:27:09.536 [2024-12-06 04:17:56.905153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.536 [2024-12-06 04:17:56.905243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.536 [2024-12-06 04:17:56.905271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:09.536 [2024-12-06 04:17:56.905295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:27:09.536 [2024-12-06 04:17:56.905314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.536 [2024-12-06 04:17:56.927946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.536 [2024-12-06 04:17:56.928053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:09.536 [2024-12-06 04:17:56.928109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.575 ms 00:27:09.536 [2024-12-06 04:17:56.928134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.536 [2024-12-06 04:17:56.950602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.536 [2024-12-06 04:17:56.950728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:09.536 [2024-12-06 04:17:56.950747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.389 ms 00:27:09.536 [2024-12-06 04:17:56.950755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.536 [2024-12-06 04:17:56.951301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.536 [2024-12-06 04:17:56.951318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:09.536 [2024-12-06 04:17:56.951329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.513 ms 00:27:09.536 [2024-12-06 04:17:56.951338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.536 [2024-12-06 04:17:57.028296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.536 [2024-12-06 04:17:57.028346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:09.536 [2024-12-06 04:17:57.028365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 76.918 ms 00:27:09.536 [2024-12-06 04:17:57.028373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.536 [2024-12-06 04:17:57.052132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:09.536 [2024-12-06 04:17:57.052171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:09.536 [2024-12-06 04:17:57.052186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.689 ms 00:27:09.536 [2024-12-06 04:17:57.052194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.794 [2024-12-06 04:17:57.074905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.795 [2024-12-06 04:17:57.074939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:09.795 [2024-12-06 04:17:57.074951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.684 ms 00:27:09.795 [2024-12-06 04:17:57.074959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.795 [2024-12-06 04:17:57.097363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.795 [2024-12-06 04:17:57.097396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:09.795 [2024-12-06 04:17:57.097408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.378 ms 00:27:09.795 [2024-12-06 04:17:57.097415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.795 [2024-12-06 04:17:57.097444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.795 [2024-12-06 04:17:57.097453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:09.795 [2024-12-06 04:17:57.097466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:09.795 [2024-12-06 04:17:57.097473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.795 [2024-12-06 04:17:57.097546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.795 [2024-12-06 04:17:57.097557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:09.795 [2024-12-06 04:17:57.097567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:09.795 [2024-12-06 04:17:57.097574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.795 [2024-12-06 04:17:57.098387] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2862.268 ms, result 0 00:27:09.795 { 00:27:09.795 "name": "ftl", 00:27:09.795 "uuid": "3cd2e71b-32c0-46ae-b13c-98922401b3c9" 00:27:09.795 } 00:27:09.795 04:17:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:09.795 [2024-12-06 04:17:57.301876] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:09.795 04:17:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:10.052 04:17:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:10.309 [2024-12-06 04:17:57.698225] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:10.309 04:17:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:10.567 [2024-12-06 04:17:57.890614] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:10.567 04:17:57 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:10.824 Fill FTL, iteration 1 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80433 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80433 /var/tmp/spdk.tgt.sock 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80433 ']' 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:10.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:10.824 04:17:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:10.824 [2024-12-06 04:17:58.314479] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
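[annotation] The xtrace above pins down the workload shape for the fill phase: 1 GiB per iteration (bs=1048576 x count=1024 blocks), queue depth 2, two iterations, with seek and skip each advancing by one iteration's worth of blocks. A minimal sketch of the write half of that loop, reconstructed from the traced variables (the control flow is an assumption, not a quote of upgrade_shutdown.sh; the read-back half is traced, and sketched, further down):

  # Hedged reconstruction of the fill loop; tcp_dd is the helper traced
  # from ftl/common.sh, parameter values copied from the xtrace above.
  bs=1048576; count=1024; qd=2; iterations=2
  seek=0
  for (( i = 0; i < iterations; i++ )); do
    echo "Fill FTL, iteration $(( i + 1 ))"
    # write 1 GiB of random data at the current block offset
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
    seek=$(( seek + count ))   # next iteration writes the following 1 GiB window
  done
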
00:27:10.824 [2024-12-06 04:17:58.314782] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80433 ] 00:27:11.081 [2024-12-06 04:17:58.468946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.081 [2024-12-06 04:17:58.564742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.645 04:17:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:11.645 04:17:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:11.645 04:17:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:11.903 ftln1 00:27:11.903 04:17:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:11.903 04:17:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:12.161 04:17:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:12.161 04:17:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80433 00:27:12.161 04:17:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80433 ']' 00:27:12.161 04:17:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80433 00:27:12.161 04:17:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:12.161 04:17:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:12.161 04:17:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80433 00:27:12.161 killing process with pid 80433 00:27:12.161 04:17:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:12.161 04:17:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:12.161 04:17:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80433' 00:27:12.161 04:17:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80433 00:27:12.161 04:17:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80433 00:27:13.562 04:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:13.562 04:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:13.820 [2024-12-06 04:18:01.114644] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
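[annotation] Worth pausing on what tcp_dd actually is here: per the ftl/common.sh trace just above, each invocation stands up a short-lived initiator app on core 1, attaches the FTL namespace the target exports over NVMe/TCP (which surfaces the bdev "ftln1"), snapshots the resulting bdev config to ini.json, tears the initiator down, and then runs the copy inside spdk_dd fed by that JSON. A condensed sketch assembled from the traced commands (glue and variable handling paraphrased):

  spdk=/home/vagrant/spdk_repo/spdk
  rpc="$spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
  # throwaway initiator app on core 1 with its own RPC socket
  "$spdk/build/bin/spdk_tgt" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  spdk_ini_pid=$!
  waitforlisten "$spdk_ini_pid" /var/tmp/spdk.tgt.sock   # autotest_common.sh helper
  # attach the NVMe/TCP-exported FTL namespace; this surfaces bdev ftln1
  $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2018-09.io.spdk:cnode0
  # snapshot the bdev subsystem so spdk_dd can recreate it without RPC
  {
    echo '{"subsystems": ['
    $rpc save_subsystem_config -n bdev
    echo ']}'
  } > "$spdk/test/ftl/config/ini.json"
  killprocess "$spdk_ini_pid"   # traced helper, effectively kill + wait
  # the copy itself runs inside spdk_dd, fed the saved config
  "$spdk/build/bin/spdk_dd" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json="$spdk/test/ftl/config/ini.json" "$@"

This is why every fill and checksum step below logs its own "Starting SPDK" banner with a fresh pid (80475, 80528, ...): each tcp_dd is a separate spdk_dd process, while the target under test (pid 80321) keeps running on core 0.
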
00:27:13.820 [2024-12-06 04:18:01.114777] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80475 ] 00:27:13.820 [2024-12-06 04:18:01.271805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.077 [2024-12-06 04:18:01.350048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.450  [2024-12-06T04:18:03.913Z] Copying: 260/1024 [MB] (260 MBps) [2024-12-06T04:18:04.848Z] Copying: 523/1024 [MB] (263 MBps) [2024-12-06T04:18:05.780Z] Copying: 786/1024 [MB] (263 MBps) [2024-12-06T04:18:06.346Z] Copying: 1024/1024 [MB] (average 259 MBps) 00:27:18.819 00:27:18.819 Calculate MD5 checksum, iteration 1 00:27:18.819 04:18:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:18.819 04:18:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:18.819 04:18:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:18.819 04:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:18.819 04:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:18.819 04:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:18.819 04:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:18.819 04:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:18.820 [2024-12-06 04:18:06.224331] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:27:18.820 [2024-12-06 04:18:06.224444] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80528 ] 00:27:19.078 [2024-12-06 04:18:06.378553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.078 [2024-12-06 04:18:06.455155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.453  [2024-12-06T04:18:08.238Z] Copying: 685/1024 [MB] (685 MBps) [2024-12-06T04:18:08.806Z] Copying: 1024/1024 [MB] (average 694 MBps) 00:27:21.279 00:27:21.279 04:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:21.279 04:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:23.181 04:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:23.181 Fill FTL, iteration 2 00:27:23.181 04:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=de660a71cfc274d8d5d2a8a2e812e038 00:27:23.181 04:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:23.181 04:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:23.181 04:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:23.181 04:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:23.181 04:18:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:23.181 04:18:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:23.181 04:18:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:23.181 04:18:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:23.181 04:18:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:23.440 [2024-12-06 04:18:10.760695] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
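[annotation] Iteration 1's digest has just been captured (de660a71cfc274d8d5d2a8a2e812e038), and the iteration-2 fill launched above proceeds the same way. The verify half of each iteration, per the traces: read the just-written window back out of ftln1 into an ordinary file, then hash it. Sketch, with the file path copied from the trace and the skip bookkeeping assumed to mirror seek:

  file=/home/vagrant/spdk_repo/spdk/test/ftl/file
  tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
  skip=$(( skip + 1024 ))
  sums[i]=$(md5sum "$file" | cut -f1 -d' ')   # per-iteration digest

The sums array presumably exists so the same windows can be re-read and re-hashed after the prep-upgrade shutdown and restart later in the run, proving the data survived.
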
00:27:23.440 [2024-12-06 04:18:10.760826] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80578 ] 00:27:23.440 [2024-12-06 04:18:10.914552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.698 [2024-12-06 04:18:10.994667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.073  [2024-12-06T04:18:13.535Z] Copying: 261/1024 [MB] (261 MBps) [2024-12-06T04:18:14.469Z] Copying: 515/1024 [MB] (254 MBps) [2024-12-06T04:18:15.402Z] Copying: 771/1024 [MB] (256 MBps) [2024-12-06T04:18:15.967Z] Copying: 1024/1024 [MB] (average 257 MBps) 00:27:28.440 00:27:28.440 Calculate MD5 checksum, iteration 2 00:27:28.440 04:18:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:28.440 04:18:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:28.440 04:18:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:28.440 04:18:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:28.440 04:18:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:28.440 04:18:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:28.440 04:18:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:28.440 04:18:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:28.440 [2024-12-06 04:18:15.928682] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
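[annotation] Once the second read-back just launched finishes, the test flips the FTL properties over RPC and sanity-checks that the NV cache actually holds data before shutting down; the jq filter is traced verbatim at upgrade_shutdown.sh@63 in the output below. Condensed (the exit-on-empty guard is an assumption; the traced script only shows the [[ $used -eq 0 ]] test):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_ftl_set_property -b ftl -p verbose_mode -v true   # expose advanced properties
  $rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
  # count NV-cache chunks holding data; this run finds 3
  # (two CLOSED chunks at utilization 1.0 plus one partially filled OPEN chunk)
  used=$($rpc bdev_ftl_get_properties -b ftl |
         jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  [[ $used -eq 0 ]] && exit 1   # an empty cache would make the upgrade path trivial
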
00:27:28.440 [2024-12-06 04:18:15.929024] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80636 ] 00:27:28.697 [2024-12-06 04:18:16.084322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.697 [2024-12-06 04:18:16.162024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.068  [2024-12-06T04:18:18.160Z] Copying: 675/1024 [MB] (675 MBps) [2024-12-06T04:18:19.095Z] Copying: 1024/1024 [MB] (average 665 MBps) 00:27:31.568 00:27:31.568 04:18:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:31.568 04:18:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:33.468 04:18:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:33.468 04:18:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=a2af19ac5e4fe7e6021bc3746742d25e 00:27:33.468 04:18:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:33.468 04:18:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:33.468 04:18:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:33.726 [2024-12-06 04:18:21.160265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.726 [2024-12-06 04:18:21.160311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:33.726 [2024-12-06 04:18:21.160323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:33.726 [2024-12-06 04:18:21.160330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.726 [2024-12-06 04:18:21.160349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.726 [2024-12-06 04:18:21.160357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:33.726 [2024-12-06 04:18:21.160365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:33.726 [2024-12-06 04:18:21.160370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.726 [2024-12-06 04:18:21.160401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.726 [2024-12-06 04:18:21.160408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:33.726 [2024-12-06 04:18:21.160414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:33.726 [2024-12-06 04:18:21.160420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.726 [2024-12-06 04:18:21.160470] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.193 ms, result 0 00:27:33.726 true 00:27:33.726 04:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:33.984 { 00:27:33.984 "name": "ftl", 00:27:33.984 "properties": [ 00:27:33.984 { 00:27:33.984 "name": "superblock_version", 00:27:33.984 "value": 5, 00:27:33.984 "read-only": true 00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "name": "base_device", 00:27:33.984 "bands": [ 00:27:33.984 { 00:27:33.984 "id": 0, 00:27:33.984 "state": "FREE", 00:27:33.984 "validity": 0.0 
00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "id": 1, 00:27:33.984 "state": "FREE", 00:27:33.984 "validity": 0.0 00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "id": 2, 00:27:33.984 "state": "FREE", 00:27:33.984 "validity": 0.0 00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "id": 3, 00:27:33.984 "state": "FREE", 00:27:33.984 "validity": 0.0 00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "id": 4, 00:27:33.984 "state": "FREE", 00:27:33.984 "validity": 0.0 00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "id": 5, 00:27:33.984 "state": "FREE", 00:27:33.984 "validity": 0.0 00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "id": 6, 00:27:33.984 "state": "FREE", 00:27:33.984 "validity": 0.0 00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "id": 7, 00:27:33.984 "state": "FREE", 00:27:33.984 "validity": 0.0 00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "id": 8, 00:27:33.984 "state": "FREE", 00:27:33.984 "validity": 0.0 00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "id": 9, 00:27:33.984 "state": "FREE", 00:27:33.984 "validity": 0.0 00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "id": 10, 00:27:33.984 "state": "FREE", 00:27:33.984 "validity": 0.0 00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "id": 11, 00:27:33.984 "state": "FREE", 00:27:33.984 "validity": 0.0 00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "id": 12, 00:27:33.984 "state": "FREE", 00:27:33.984 "validity": 0.0 00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "id": 13, 00:27:33.984 "state": "FREE", 00:27:33.984 "validity": 0.0 00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "id": 14, 00:27:33.984 "state": "FREE", 00:27:33.984 "validity": 0.0 00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "id": 15, 00:27:33.984 "state": "FREE", 00:27:33.984 "validity": 0.0 00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "id": 16, 00:27:33.984 "state": "FREE", 00:27:33.984 "validity": 0.0 00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "id": 17, 00:27:33.984 "state": "FREE", 00:27:33.984 "validity": 0.0 00:27:33.984 } 00:27:33.984 ], 00:27:33.984 "read-only": true 00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "name": "cache_device", 00:27:33.984 "type": "bdev", 00:27:33.984 "chunks": [ 00:27:33.984 { 00:27:33.984 "id": 0, 00:27:33.984 "state": "INACTIVE", 00:27:33.984 "utilization": 0.0 00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "id": 1, 00:27:33.984 "state": "CLOSED", 00:27:33.984 "utilization": 1.0 00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "id": 2, 00:27:33.984 "state": "CLOSED", 00:27:33.984 "utilization": 1.0 00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "id": 3, 00:27:33.984 "state": "OPEN", 00:27:33.984 "utilization": 0.001953125 00:27:33.984 }, 00:27:33.984 { 00:27:33.984 "id": 4, 00:27:33.984 "state": "OPEN", 00:27:33.984 "utilization": 0.0 00:27:33.985 } 00:27:33.985 ], 00:27:33.985 "read-only": true 00:27:33.985 }, 00:27:33.985 { 00:27:33.985 "name": "verbose_mode", 00:27:33.985 "value": true, 00:27:33.985 "unit": "", 00:27:33.985 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:33.985 }, 00:27:33.985 { 00:27:33.985 "name": "prep_upgrade_on_shutdown", 00:27:33.985 "value": false, 00:27:33.985 "unit": "", 00:27:33.985 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:33.985 } 00:27:33.985 ] 00:27:33.985 } 00:27:33.985 04:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:34.270 [2024-12-06 04:18:21.564611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:34.270 [2024-12-06 04:18:21.564654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:34.270 [2024-12-06 04:18:21.564664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:34.270 [2024-12-06 04:18:21.564669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.270 [2024-12-06 04:18:21.564687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.270 [2024-12-06 04:18:21.564693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:34.270 [2024-12-06 04:18:21.564700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:34.270 [2024-12-06 04:18:21.564705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.270 [2024-12-06 04:18:21.564729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.270 [2024-12-06 04:18:21.564736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:34.270 [2024-12-06 04:18:21.564741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:34.270 [2024-12-06 04:18:21.564747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.270 [2024-12-06 04:18:21.564792] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.173 ms, result 0 00:27:34.270 true 00:27:34.270 04:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:34.270 04:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:34.270 04:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:34.585 04:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:34.585 04:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:34.585 04:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:34.585 [2024-12-06 04:18:21.976920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.585 [2024-12-06 04:18:21.976963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:34.585 [2024-12-06 04:18:21.976972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:34.585 [2024-12-06 04:18:21.976978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.585 [2024-12-06 04:18:21.976995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.585 [2024-12-06 04:18:21.977001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:34.585 [2024-12-06 04:18:21.977007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:34.585 [2024-12-06 04:18:21.977012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.585 [2024-12-06 04:18:21.977027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.585 [2024-12-06 04:18:21.977033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:34.585 [2024-12-06 04:18:21.977039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:34.585 [2024-12-06 04:18:21.977044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:34.585 [2024-12-06 04:18:21.977087] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.160 ms, result 0 00:27:34.585 true 00:27:34.585 04:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:34.853 { 00:27:34.853 "name": "ftl", 00:27:34.853 "properties": [ 00:27:34.853 { 00:27:34.853 "name": "superblock_version", 00:27:34.853 "value": 5, 00:27:34.853 "read-only": true 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "name": "base_device", 00:27:34.853 "bands": [ 00:27:34.853 { 00:27:34.853 "id": 0, 00:27:34.853 "state": "FREE", 00:27:34.853 "validity": 0.0 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "id": 1, 00:27:34.853 "state": "FREE", 00:27:34.853 "validity": 0.0 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "id": 2, 00:27:34.853 "state": "FREE", 00:27:34.853 "validity": 0.0 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "id": 3, 00:27:34.853 "state": "FREE", 00:27:34.853 "validity": 0.0 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "id": 4, 00:27:34.853 "state": "FREE", 00:27:34.853 "validity": 0.0 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "id": 5, 00:27:34.853 "state": "FREE", 00:27:34.853 "validity": 0.0 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "id": 6, 00:27:34.853 "state": "FREE", 00:27:34.853 "validity": 0.0 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "id": 7, 00:27:34.853 "state": "FREE", 00:27:34.853 "validity": 0.0 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "id": 8, 00:27:34.853 "state": "FREE", 00:27:34.853 "validity": 0.0 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "id": 9, 00:27:34.853 "state": "FREE", 00:27:34.853 "validity": 0.0 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "id": 10, 00:27:34.853 "state": "FREE", 00:27:34.853 "validity": 0.0 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "id": 11, 00:27:34.853 "state": "FREE", 00:27:34.853 "validity": 0.0 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "id": 12, 00:27:34.853 "state": "FREE", 00:27:34.853 "validity": 0.0 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "id": 13, 00:27:34.853 "state": "FREE", 00:27:34.853 "validity": 0.0 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "id": 14, 00:27:34.853 "state": "FREE", 00:27:34.853 "validity": 0.0 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "id": 15, 00:27:34.853 "state": "FREE", 00:27:34.853 "validity": 0.0 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "id": 16, 00:27:34.853 "state": "FREE", 00:27:34.853 "validity": 0.0 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "id": 17, 00:27:34.853 "state": "FREE", 00:27:34.853 "validity": 0.0 00:27:34.853 } 00:27:34.853 ], 00:27:34.853 "read-only": true 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "name": "cache_device", 00:27:34.853 "type": "bdev", 00:27:34.853 "chunks": [ 00:27:34.853 { 00:27:34.853 "id": 0, 00:27:34.853 "state": "INACTIVE", 00:27:34.853 "utilization": 0.0 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "id": 1, 00:27:34.853 "state": "CLOSED", 00:27:34.853 "utilization": 1.0 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "id": 2, 00:27:34.853 "state": "CLOSED", 00:27:34.853 "utilization": 1.0 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "id": 3, 00:27:34.853 "state": "OPEN", 00:27:34.853 "utilization": 0.001953125 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "id": 4, 00:27:34.853 "state": "OPEN", 00:27:34.853 "utilization": 0.0 00:27:34.853 } 00:27:34.853 ], 00:27:34.853 "read-only": true 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "name": "verbose_mode", 
00:27:34.853 "value": true, 00:27:34.853 "unit": "", 00:27:34.853 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:34.853 }, 00:27:34.853 { 00:27:34.853 "name": "prep_upgrade_on_shutdown", 00:27:34.853 "value": true, 00:27:34.853 "unit": "", 00:27:34.853 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:34.853 } 00:27:34.853 ] 00:27:34.853 } 00:27:34.853 04:18:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:34.853 04:18:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80321 ]] 00:27:34.853 04:18:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80321 00:27:34.853 04:18:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80321 ']' 00:27:34.853 04:18:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80321 00:27:34.853 04:18:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:34.853 04:18:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:34.853 04:18:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80321 00:27:34.853 04:18:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:34.853 04:18:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:34.853 04:18:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80321' 00:27:34.853 killing process with pid 80321 00:27:34.853 04:18:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80321 00:27:34.853 04:18:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80321 00:27:35.418 [2024-12-06 04:18:22.752680] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:35.418 [2024-12-06 04:18:22.763039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.418 [2024-12-06 04:18:22.763074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:35.418 [2024-12-06 04:18:22.763083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:35.418 [2024-12-06 04:18:22.763089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.418 [2024-12-06 04:18:22.763107] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:35.418 [2024-12-06 04:18:22.765184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.418 [2024-12-06 04:18:22.765208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:35.418 [2024-12-06 04:18:22.765217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.066 ms 00:27:35.418 [2024-12-06 04:18:22.765224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.529 [2024-12-06 04:18:30.716281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.529 [2024-12-06 04:18:30.716351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:43.529 [2024-12-06 04:18:30.716371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7951.008 ms 00:27:43.529 [2024-12-06 04:18:30.716379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.529 [2024-12-06 04:18:30.717527] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:43.529 [2024-12-06 04:18:30.717553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:43.529 [2024-12-06 04:18:30.717562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.132 ms 00:27:43.529 [2024-12-06 04:18:30.717570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.529 [2024-12-06 04:18:30.718724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.529 [2024-12-06 04:18:30.718758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:43.529 [2024-12-06 04:18:30.718768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.129 ms 00:27:43.529 [2024-12-06 04:18:30.718780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.529 [2024-12-06 04:18:30.728089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.529 [2024-12-06 04:18:30.728122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:43.529 [2024-12-06 04:18:30.728131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.272 ms 00:27:43.529 [2024-12-06 04:18:30.728138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.529 [2024-12-06 04:18:30.734022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.529 [2024-12-06 04:18:30.734057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:43.529 [2024-12-06 04:18:30.734067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.853 ms 00:27:43.529 [2024-12-06 04:18:30.734075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.529 [2024-12-06 04:18:30.734155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.529 [2024-12-06 04:18:30.734169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:43.529 [2024-12-06 04:18:30.734178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:27:43.529 [2024-12-06 04:18:30.734185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.529 [2024-12-06 04:18:30.743762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.529 [2024-12-06 04:18:30.743804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:43.529 [2024-12-06 04:18:30.743816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.560 ms 00:27:43.529 [2024-12-06 04:18:30.743824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.529 [2024-12-06 04:18:30.753056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.529 [2024-12-06 04:18:30.753086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:43.529 [2024-12-06 04:18:30.753095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.198 ms 00:27:43.529 [2024-12-06 04:18:30.753102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.529 [2024-12-06 04:18:30.761751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.529 [2024-12-06 04:18:30.761781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:43.529 [2024-12-06 04:18:30.761790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.619 ms 00:27:43.529 [2024-12-06 04:18:30.761797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.529 [2024-12-06 04:18:30.770625] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.529 [2024-12-06 04:18:30.770656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:43.529 [2024-12-06 04:18:30.770664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.768 ms 00:27:43.529 [2024-12-06 04:18:30.770672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.529 [2024-12-06 04:18:30.770700] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:43.529 [2024-12-06 04:18:30.770731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:43.529 [2024-12-06 04:18:30.770741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:43.530 [2024-12-06 04:18:30.770749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:43.530 [2024-12-06 04:18:30.770757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:43.530 [2024-12-06 04:18:30.770765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:43.530 [2024-12-06 04:18:30.770772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:43.530 [2024-12-06 04:18:30.770780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:43.530 [2024-12-06 04:18:30.770787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:43.530 [2024-12-06 04:18:30.770795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:43.530 [2024-12-06 04:18:30.770802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:43.530 [2024-12-06 04:18:30.770809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:43.530 [2024-12-06 04:18:30.770816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:43.530 [2024-12-06 04:18:30.770824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:43.530 [2024-12-06 04:18:30.770831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:43.530 [2024-12-06 04:18:30.770838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:43.530 [2024-12-06 04:18:30.770847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:43.530 [2024-12-06 04:18:30.770854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:43.530 [2024-12-06 04:18:30.770863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:43.530 [2024-12-06 04:18:30.770873] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:43.530 [2024-12-06 04:18:30.770880] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 3cd2e71b-32c0-46ae-b13c-98922401b3c9 00:27:43.530 [2024-12-06 04:18:30.770887] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:43.530 [2024-12-06 04:18:30.770894] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:43.530 [2024-12-06 04:18:30.770901] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:43.530 [2024-12-06 04:18:30.770909] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:43.530 [2024-12-06 04:18:30.770918] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:43.530 [2024-12-06 04:18:30.770925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:43.530 [2024-12-06 04:18:30.770935] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:43.530 [2024-12-06 04:18:30.770941] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:43.530 [2024-12-06 04:18:30.770947] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:43.530 [2024-12-06 04:18:30.770954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.530 [2024-12-06 04:18:30.770962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:43.530 [2024-12-06 04:18:30.770970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.255 ms 00:27:43.530 [2024-12-06 04:18:30.770976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.530 [2024-12-06 04:18:30.783238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.530 [2024-12-06 04:18:30.783270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:43.530 [2024-12-06 04:18:30.783284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.246 ms 00:27:43.530 [2024-12-06 04:18:30.783292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.530 [2024-12-06 04:18:30.783628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.530 [2024-12-06 04:18:30.783646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:43.530 [2024-12-06 04:18:30.783655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.318 ms 00:27:43.530 [2024-12-06 04:18:30.783662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.530 [2024-12-06 04:18:30.824570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.530 [2024-12-06 04:18:30.824614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:43.530 [2024-12-06 04:18:30.824625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.530 [2024-12-06 04:18:30.824634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.530 [2024-12-06 04:18:30.824670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.530 [2024-12-06 04:18:30.824678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:43.530 [2024-12-06 04:18:30.824690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.530 [2024-12-06 04:18:30.824697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.530 [2024-12-06 04:18:30.824783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.530 [2024-12-06 04:18:30.824794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:43.530 [2024-12-06 04:18:30.824804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.530 [2024-12-06 04:18:30.824813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.530 [2024-12-06 04:18:30.824828] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.530 [2024-12-06 04:18:30.824836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:43.530 [2024-12-06 04:18:30.824843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.530 [2024-12-06 04:18:30.824850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.530 [2024-12-06 04:18:30.901260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.530 [2024-12-06 04:18:30.901305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:43.530 [2024-12-06 04:18:30.901320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.530 [2024-12-06 04:18:30.901328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.530 [2024-12-06 04:18:30.964113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.530 [2024-12-06 04:18:30.964160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:43.530 [2024-12-06 04:18:30.964170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.530 [2024-12-06 04:18:30.964177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.530 [2024-12-06 04:18:30.964240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.530 [2024-12-06 04:18:30.964249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:43.530 [2024-12-06 04:18:30.964257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.530 [2024-12-06 04:18:30.964269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.530 [2024-12-06 04:18:30.964321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.530 [2024-12-06 04:18:30.964330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:43.530 [2024-12-06 04:18:30.964338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.530 [2024-12-06 04:18:30.964345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.530 [2024-12-06 04:18:30.964428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.530 [2024-12-06 04:18:30.964438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:43.530 [2024-12-06 04:18:30.964446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.530 [2024-12-06 04:18:30.964453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.530 [2024-12-06 04:18:30.964488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.530 [2024-12-06 04:18:30.964496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:43.530 [2024-12-06 04:18:30.964504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.530 [2024-12-06 04:18:30.964511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.530 [2024-12-06 04:18:30.964545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.530 [2024-12-06 04:18:30.964554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:43.530 [2024-12-06 04:18:30.964562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.530 [2024-12-06 04:18:30.964569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.530 
[2024-12-06 04:18:30.964614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.530 [2024-12-06 04:18:30.964624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:43.530 [2024-12-06 04:18:30.964632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.530 [2024-12-06 04:18:30.964639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.530 [2024-12-06 04:18:30.964768] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8201.658 ms, result 0 00:27:47.724 04:18:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:47.724 04:18:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:47.724 04:18:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:47.724 04:18:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:47.724 04:18:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:47.724 04:18:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80823 00:27:47.724 04:18:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:47.724 04:18:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80823 00:27:47.724 04:18:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:47.724 04:18:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80823 ']' 00:27:47.724 04:18:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.724 04:18:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:47.724 04:18:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.724 04:18:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:47.724 04:18:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:47.724 [2024-12-06 04:18:35.038038] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
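[annotation] A quick consistency check on the statistics dumped during the shutdown above: write amplification is total writes divided by user writes,

  WAF = 786752 / 524288 ≈ 1.5006

which matches the reported value. Note also that this restart feeds spdk_tgt the saved tgt.json config on core 0, so the new process (pid 80823, replacing 80321) reopens the same base and cache bdevs and, as the trace that follows shows, reloads and validates the superblock persisted moments earlier instead of formatting a fresh device.
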
00:27:47.724 [2024-12-06 04:18:35.038164] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80823 ] 00:27:47.724 [2024-12-06 04:18:35.190974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.982 [2024-12-06 04:18:35.269369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.548 [2024-12-06 04:18:35.842442] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:48.548 [2024-12-06 04:18:35.842519] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:48.548 [2024-12-06 04:18:35.985714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.548 [2024-12-06 04:18:35.985779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:48.548 [2024-12-06 04:18:35.985791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:48.548 [2024-12-06 04:18:35.985797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.548 [2024-12-06 04:18:35.985844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.548 [2024-12-06 04:18:35.985852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:48.548 [2024-12-06 04:18:35.985859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:48.548 [2024-12-06 04:18:35.985865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.548 [2024-12-06 04:18:35.985885] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:48.548 [2024-12-06 04:18:35.986468] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:48.548 [2024-12-06 04:18:35.986488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.548 [2024-12-06 04:18:35.986494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:48.548 [2024-12-06 04:18:35.986500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.609 ms 00:27:48.548 [2024-12-06 04:18:35.986506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.548 [2024-12-06 04:18:35.987455] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:48.548 [2024-12-06 04:18:35.997135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.548 [2024-12-06 04:18:35.997170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:48.548 [2024-12-06 04:18:35.997183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.681 ms 00:27:48.549 [2024-12-06 04:18:35.997189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.549 [2024-12-06 04:18:35.997237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.549 [2024-12-06 04:18:35.997245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:48.549 [2024-12-06 04:18:35.997251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:48.549 [2024-12-06 04:18:35.997257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.549 [2024-12-06 04:18:36.001565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.549 [2024-12-06 
04:18:36.001593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:48.549 [2024-12-06 04:18:36.001600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.259 ms 00:27:48.549 [2024-12-06 04:18:36.001606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.549 [2024-12-06 04:18:36.001648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.549 [2024-12-06 04:18:36.001656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:48.549 [2024-12-06 04:18:36.001662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:27:48.549 [2024-12-06 04:18:36.001668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.549 [2024-12-06 04:18:36.001703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.549 [2024-12-06 04:18:36.001712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:48.549 [2024-12-06 04:18:36.001729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:48.549 [2024-12-06 04:18:36.001735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.549 [2024-12-06 04:18:36.001752] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:48.549 [2024-12-06 04:18:36.004475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.549 [2024-12-06 04:18:36.004501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:48.549 [2024-12-06 04:18:36.004508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.728 ms 00:27:48.549 [2024-12-06 04:18:36.004517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.549 [2024-12-06 04:18:36.004540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.549 [2024-12-06 04:18:36.004548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:48.549 [2024-12-06 04:18:36.004553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:48.549 [2024-12-06 04:18:36.004559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.549 [2024-12-06 04:18:36.004575] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:48.549 [2024-12-06 04:18:36.004592] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:48.549 [2024-12-06 04:18:36.004618] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:48.549 [2024-12-06 04:18:36.004630] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:48.549 [2024-12-06 04:18:36.004709] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:48.549 [2024-12-06 04:18:36.004725] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:48.549 [2024-12-06 04:18:36.004733] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:48.549 [2024-12-06 04:18:36.004741] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:48.549 [2024-12-06 04:18:36.004749] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:48.549 [2024-12-06 04:18:36.004758] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:48.549 [2024-12-06 04:18:36.004764] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:48.549 [2024-12-06 04:18:36.004770] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:48.549 [2024-12-06 04:18:36.004775] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:48.549 [2024-12-06 04:18:36.004781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.549 [2024-12-06 04:18:36.004787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:48.549 [2024-12-06 04:18:36.004792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.208 ms 00:27:48.549 [2024-12-06 04:18:36.004798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.549 [2024-12-06 04:18:36.004864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.549 [2024-12-06 04:18:36.004870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:48.549 [2024-12-06 04:18:36.004877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:27:48.549 [2024-12-06 04:18:36.004883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.549 [2024-12-06 04:18:36.004960] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:48.549 [2024-12-06 04:18:36.004972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:48.549 [2024-12-06 04:18:36.004980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:48.549 [2024-12-06 04:18:36.004986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:48.549 [2024-12-06 04:18:36.004991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:48.549 [2024-12-06 04:18:36.004996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:48.549 [2024-12-06 04:18:36.005002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:48.549 [2024-12-06 04:18:36.005007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:48.549 [2024-12-06 04:18:36.005012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:48.549 [2024-12-06 04:18:36.005018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:48.549 [2024-12-06 04:18:36.005023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:48.549 [2024-12-06 04:18:36.005028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:48.549 [2024-12-06 04:18:36.005033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:48.549 [2024-12-06 04:18:36.005038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:48.549 [2024-12-06 04:18:36.005043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:48.549 [2024-12-06 04:18:36.005050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:48.549 [2024-12-06 04:18:36.005055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:48.549 [2024-12-06 04:18:36.005060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:48.549 [2024-12-06 04:18:36.005065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:48.549 [2024-12-06 04:18:36.005071] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:48.549 [2024-12-06 04:18:36.005077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:48.549 [2024-12-06 04:18:36.005081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:48.549 [2024-12-06 04:18:36.005086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:48.549 [2024-12-06 04:18:36.005097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:48.549 [2024-12-06 04:18:36.005102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:48.549 [2024-12-06 04:18:36.005107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:48.549 [2024-12-06 04:18:36.005111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:48.549 [2024-12-06 04:18:36.005116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:48.549 [2024-12-06 04:18:36.005121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:48.549 [2024-12-06 04:18:36.005126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:48.549 [2024-12-06 04:18:36.005131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:48.549 [2024-12-06 04:18:36.005136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:48.549 [2024-12-06 04:18:36.005141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:48.549 [2024-12-06 04:18:36.005145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:48.549 [2024-12-06 04:18:36.005150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:48.549 [2024-12-06 04:18:36.005155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:48.549 [2024-12-06 04:18:36.005159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:48.549 [2024-12-06 04:18:36.005164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:48.549 [2024-12-06 04:18:36.005170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:48.549 [2024-12-06 04:18:36.005175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:48.549 [2024-12-06 04:18:36.005180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:48.549 [2024-12-06 04:18:36.005184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:48.549 [2024-12-06 04:18:36.005189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:48.549 [2024-12-06 04:18:36.005194] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:48.549 [2024-12-06 04:18:36.005200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:48.549 [2024-12-06 04:18:36.005206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:48.549 [2024-12-06 04:18:36.005211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:48.549 [2024-12-06 04:18:36.005268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:48.549 [2024-12-06 04:18:36.005273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:48.549 [2024-12-06 04:18:36.005279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:48.549 [2024-12-06 04:18:36.005285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:48.549 [2024-12-06 04:18:36.005290] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:48.549 [2024-12-06 04:18:36.005295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:48.549 [2024-12-06 04:18:36.005301] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:48.549 [2024-12-06 04:18:36.005308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:48.549 [2024-12-06 04:18:36.005315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:48.549 [2024-12-06 04:18:36.005320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:48.549 [2024-12-06 04:18:36.005326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:48.550 [2024-12-06 04:18:36.005331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:48.550 [2024-12-06 04:18:36.005337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:48.550 [2024-12-06 04:18:36.005342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:48.550 [2024-12-06 04:18:36.005348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:48.550 [2024-12-06 04:18:36.005353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:48.550 [2024-12-06 04:18:36.005359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:48.550 [2024-12-06 04:18:36.005364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:48.550 [2024-12-06 04:18:36.005370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:48.550 [2024-12-06 04:18:36.005375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:48.550 [2024-12-06 04:18:36.005381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:48.550 [2024-12-06 04:18:36.005386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:48.550 [2024-12-06 04:18:36.005392] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:48.550 [2024-12-06 04:18:36.005398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:48.550 [2024-12-06 04:18:36.005404] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:48.550 [2024-12-06 04:18:36.005409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:48.550 [2024-12-06 04:18:36.005414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:48.550 [2024-12-06 04:18:36.005420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:48.550 [2024-12-06 04:18:36.005425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.550 [2024-12-06 04:18:36.005431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:48.550 [2024-12-06 04:18:36.005436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.518 ms 00:27:48.550 [2024-12-06 04:18:36.005442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.550 [2024-12-06 04:18:36.005478] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:48.550 [2024-12-06 04:18:36.005486] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:51.079 [2024-12-06 04:18:38.210428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.079 [2024-12-06 04:18:38.210496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:51.079 [2024-12-06 04:18:38.210511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2204.940 ms 00:27:51.079 [2024-12-06 04:18:38.210520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.079 [2024-12-06 04:18:38.235322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.079 [2024-12-06 04:18:38.235370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:51.079 [2024-12-06 04:18:38.235382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.592 ms 00:27:51.079 [2024-12-06 04:18:38.235390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.079 [2024-12-06 04:18:38.235488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.079 [2024-12-06 04:18:38.235503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:51.079 [2024-12-06 04:18:38.235512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:51.079 [2024-12-06 04:18:38.235520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.079 [2024-12-06 04:18:38.265887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.079 [2024-12-06 04:18:38.265929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:51.079 [2024-12-06 04:18:38.265943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.330 ms 00:27:51.079 [2024-12-06 04:18:38.265951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.079 [2024-12-06 04:18:38.265988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.079 [2024-12-06 04:18:38.265997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:51.079 [2024-12-06 04:18:38.266005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:51.079 [2024-12-06 04:18:38.266013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.079 [2024-12-06 04:18:38.266356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.079 [2024-12-06 04:18:38.266380] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:51.079 [2024-12-06 04:18:38.266389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.285 ms 00:27:51.079 [2024-12-06 04:18:38.266396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.079 [2024-12-06 04:18:38.266443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.079 [2024-12-06 04:18:38.266451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:51.079 [2024-12-06 04:18:38.266469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:51.079 [2024-12-06 04:18:38.266476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.079 [2024-12-06 04:18:38.280115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.079 [2024-12-06 04:18:38.280148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:51.079 [2024-12-06 04:18:38.280158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.617 ms 00:27:51.079 [2024-12-06 04:18:38.280165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.079 [2024-12-06 04:18:38.302429] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:51.079 [2024-12-06 04:18:38.302494] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:51.079 [2024-12-06 04:18:38.302509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.079 [2024-12-06 04:18:38.302518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:51.079 [2024-12-06 04:18:38.302529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.226 ms 00:27:51.079 [2024-12-06 04:18:38.302537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.079 [2024-12-06 04:18:38.316553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.079 [2024-12-06 04:18:38.316605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:51.079 [2024-12-06 04:18:38.316619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.961 ms 00:27:51.079 [2024-12-06 04:18:38.316626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.079 [2024-12-06 04:18:38.328206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.079 [2024-12-06 04:18:38.328249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:51.079 [2024-12-06 04:18:38.328259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.526 ms 00:27:51.079 [2024-12-06 04:18:38.328266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.079 [2024-12-06 04:18:38.339354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.079 [2024-12-06 04:18:38.339393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:51.079 [2024-12-06 04:18:38.339404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.050 ms 00:27:51.079 [2024-12-06 04:18:38.339411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.079 [2024-12-06 04:18:38.340061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.079 [2024-12-06 04:18:38.340087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:51.079 [2024-12-06 
04:18:38.340097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.550 ms 00:27:51.079 [2024-12-06 04:18:38.340104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.079 [2024-12-06 04:18:38.393773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.079 [2024-12-06 04:18:38.393827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:51.079 [2024-12-06 04:18:38.393840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 53.649 ms 00:27:51.079 [2024-12-06 04:18:38.393848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.079 [2024-12-06 04:18:38.404306] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:51.079 [2024-12-06 04:18:38.405078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.079 [2024-12-06 04:18:38.405108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:51.079 [2024-12-06 04:18:38.405119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.174 ms 00:27:51.079 [2024-12-06 04:18:38.405126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.079 [2024-12-06 04:18:38.405217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.079 [2024-12-06 04:18:38.405230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:51.079 [2024-12-06 04:18:38.405239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:51.079 [2024-12-06 04:18:38.405246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.079 [2024-12-06 04:18:38.405297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.079 [2024-12-06 04:18:38.405307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:51.079 [2024-12-06 04:18:38.405316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:51.079 [2024-12-06 04:18:38.405323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.079 [2024-12-06 04:18:38.405342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.079 [2024-12-06 04:18:38.405351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:51.079 [2024-12-06 04:18:38.405361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:51.079 [2024-12-06 04:18:38.405369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.079 [2024-12-06 04:18:38.405400] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:51.079 [2024-12-06 04:18:38.405410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.079 [2024-12-06 04:18:38.405418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:51.079 [2024-12-06 04:18:38.405426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:51.079 [2024-12-06 04:18:38.405433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.079 [2024-12-06 04:18:38.427967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.079 [2024-12-06 04:18:38.428015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:51.079 [2024-12-06 04:18:38.428028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.514 ms 00:27:51.079 [2024-12-06 04:18:38.428035] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.079 [2024-12-06 04:18:38.428105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.079 [2024-12-06 04:18:38.428114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:51.079 [2024-12-06 04:18:38.428123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:27:51.079 [2024-12-06 04:18:38.428130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.079 [2024-12-06 04:18:38.429425] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2443.298 ms, result 0 00:27:51.079 [2024-12-06 04:18:38.444323] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.079 [2024-12-06 04:18:38.460318] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:51.079 [2024-12-06 04:18:38.468425] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:51.079 04:18:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:51.079 04:18:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:51.079 04:18:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:51.079 04:18:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:51.079 04:18:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:51.338 [2024-12-06 04:18:38.692524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.338 [2024-12-06 04:18:38.692575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:51.338 [2024-12-06 04:18:38.692592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:51.338 [2024-12-06 04:18:38.692600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.338 [2024-12-06 04:18:38.692624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.338 [2024-12-06 04:18:38.692634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:51.338 [2024-12-06 04:18:38.692642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:51.338 [2024-12-06 04:18:38.692649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.338 [2024-12-06 04:18:38.692668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.338 [2024-12-06 04:18:38.692676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:51.338 [2024-12-06 04:18:38.692684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:51.338 [2024-12-06 04:18:38.692691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.338 [2024-12-06 04:18:38.692761] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.217 ms, result 0 00:27:51.338 true 00:27:51.338 04:18:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:51.596 { 00:27:51.597 "name": "ftl", 00:27:51.597 "properties": [ 00:27:51.597 { 00:27:51.597 "name": "superblock_version", 00:27:51.597 "value": 5, 00:27:51.597 "read-only": true 00:27:51.597 }, 
00:27:51.597 {
00:27:51.597   "name": "base_device",
00:27:51.597   "bands": [
00:27:51.597     { "id": 0, "state": "CLOSED", "validity": 1.0 },
00:27:51.597     { "id": 1, "state": "CLOSED", "validity": 1.0 },
00:27:51.597     { "id": 2, "state": "CLOSED", "validity": 0.007843137254901933 },
00:27:51.597     { "id": 3, "state": "FREE", "validity": 0.0 },
00:27:51.597     { "id": 4, "state": "FREE", "validity": 0.0 },
00:27:51.597     { "id": 5, "state": "FREE", "validity": 0.0 },
00:27:51.597     { "id": 6, "state": "FREE", "validity": 0.0 },
00:27:51.597     { "id": 7, "state": "FREE", "validity": 0.0 },
00:27:51.597     { "id": 8, "state": "FREE", "validity": 0.0 },
00:27:51.597     { "id": 9, "state": "FREE", "validity": 0.0 },
00:27:51.597     { "id": 10, "state": "FREE", "validity": 0.0 },
00:27:51.597     { "id": 11, "state": "FREE", "validity": 0.0 },
00:27:51.597     { "id": 12, "state": "FREE", "validity": 0.0 },
00:27:51.597     { "id": 13, "state": "FREE", "validity": 0.0 },
00:27:51.597     { "id": 14, "state": "FREE", "validity": 0.0 },
00:27:51.597     { "id": 15, "state": "FREE", "validity": 0.0 },
00:27:51.597     { "id": 16, "state": "FREE", "validity": 0.0 },
00:27:51.597     { "id": 17, "state": "FREE", "validity": 0.0 }
00:27:51.597   ],
00:27:51.597   "read-only": true
00:27:51.597 },
00:27:51.597 {
00:27:51.597   "name": "cache_device",
00:27:51.597   "type": "bdev",
00:27:51.597   "chunks": [
00:27:51.597     { "id": 0, "state": "INACTIVE", "utilization": 0.0 },
00:27:51.597     { "id": 1, "state": "OPEN", "utilization": 0.0 },
00:27:51.597     { "id": 2, "state": "OPEN", "utilization": 0.0 },
00:27:51.597     { "id": 3, "state": "FREE", "utilization": 0.0 },
00:27:51.597     { "id": 4, "state": "FREE", "utilization": 0.0 }
00:27:51.597   ],
00:27:51.597   "read-only": true
00:27:51.597 },
00:27:51.597 {
00:27:51.597   "name": "verbose_mode",
00:27:51.597   "value": true,
00:27:51.597   "unit": "",
00:27:51.597   "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:27:51.597 },
00:27:51.597 {
00:27:51.597   "name": "prep_upgrade_on_shutdown",
00:27:51.597   "value": false,
00:27:51.597   "unit": "",
00:27:51.597   "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:27:51.597 }
00:27:51.597 ]
00:27:51.597 }
00:27:51.597 04:18:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:51.597 04:18:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:51.597 04:18:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:51.597 04:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:51.597 04:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:51.597 04:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:51.597 04:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:51.597 04:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:51.855 04:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:51.855 04:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:51.856 04:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:51.856 04:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:51.856 04:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:51.856 04:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:51.856 Validate MD5 checksum, iteration 1 00:27:51.856 04:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:51.856 04:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:51.856 04:18:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:51.856 04:18:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:51.856 04:18:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:51.856 04:18:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:51.856 04:18:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:52.114 [2024-12-06 04:18:39.383265] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:27:52.114 [2024-12-06 04:18:39.383379] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80894 ] 00:27:52.114 [2024-12-06 04:18:39.541948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.114 [2024-12-06 04:18:39.638470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.014  [2024-12-06T04:18:41.799Z] Copying: 673/1024 [MB] (673 MBps) [2024-12-06T04:18:42.735Z] Copying: 1024/1024 [MB] (average 664 MBps) 00:27:55.208 00:27:55.208 04:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:55.208 04:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:57.112 04:18:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:57.112 Validate MD5 checksum, iteration 2 00:27:57.112 04:18:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=de660a71cfc274d8d5d2a8a2e812e038 00:27:57.112 04:18:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ de660a71cfc274d8d5d2a8a2e812e038 != \d\e\6\6\0\a\7\1\c\f\c\2\7\4\d\8\d\5\d\2\a\8\a\2\e\8\1\2\e\0\3\8 ]] 00:27:57.112 04:18:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:57.112 04:18:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:57.112 04:18:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:57.112 04:18:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:57.112 04:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:57.112 04:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:57.112 04:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:57.112 04:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:57.112 04:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:57.112 [2024-12-06 04:18:44.548065] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
00:27:57.112 [2024-12-06 04:18:44.548182] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80953 ] 00:27:57.370 [2024-12-06 04:18:44.708600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.370 [2024-12-06 04:18:44.806926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.271  [2024-12-06T04:18:46.798Z] Copying: 687/1024 [MB] (687 MBps) [2024-12-06T04:18:47.729Z] Copying: 1024/1024 [MB] (average 693 MBps) 00:28:00.202 00:28:00.202 04:18:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:00.202 04:18:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:02.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a2af19ac5e4fe7e6021bc3746742d25e 00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a2af19ac5e4fe7e6021bc3746742d25e != \a\2\a\f\1\9\a\c\5\e\4\f\e\7\e\6\0\2\1\b\c\3\7\4\6\7\4\2\d\2\5\e ]] 00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 80823 ]] 00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 80823 00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81015 00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81015 00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81015 ']' 00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
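The two "Validate MD5 checksum" passes traced above are the test's integrity check: each pass reads a 1024 MiB window out of the ftln1 bdev over NVMe/TCP with spdk_dd, advances the skip offset by the window size, and compares the file's md5sum against the digest recorded when the data pattern was written. A minimal sketch of that loop; tcp_dd, the file path, and the flag values appear verbatim in the trace, while iterations and the sums array of expected digests are assumed placeholder names:

    # Sketch only: iterations and sums[] are assumed names for this illustration.
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
            --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d ')
        [[ $sum == "${sums[i]}" ]] # a mismatch fails the test
    done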
00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:02.103 04:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:02.103 [2024-12-06 04:18:49.539990] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:28:02.103 [2024-12-06 04:18:49.540109] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81015 ] 00:28:02.103 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 80823 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:02.362 [2024-12-06 04:18:49.697325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.362 [2024-12-06 04:18:49.774607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.929 [2024-12-06 04:18:50.352732] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:02.929 [2024-12-06 04:18:50.352791] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:03.188 [2024-12-06 04:18:50.495999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.188 [2024-12-06 04:18:50.496045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:03.188 [2024-12-06 04:18:50.496058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:03.188 [2024-12-06 04:18:50.496066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.188 [2024-12-06 04:18:50.496115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.188 [2024-12-06 04:18:50.496126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:03.188 [2024-12-06 04:18:50.496134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:03.188 [2024-12-06 04:18:50.496141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.188 [2024-12-06 04:18:50.496162] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:03.188 [2024-12-06 04:18:50.496877] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:03.188 [2024-12-06 04:18:50.496899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.188 [2024-12-06 04:18:50.496907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:03.188 [2024-12-06 04:18:50.496917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.745 ms 00:28:03.188 [2024-12-06 04:18:50.496924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.188 [2024-12-06 04:18:50.497271] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:03.188 [2024-12-06 04:18:50.512579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.188 [2024-12-06 04:18:50.512614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:03.188 [2024-12-06 04:18:50.512626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.308 ms 
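From this point the log exercises the dirty-shutdown path: tcp_target_shutdown_dirty sends SIGKILL to the old target (pid 80823, the "Killed" line above) so FTL never performs a clean shutdown, then tcp_target_setup relaunches spdk_tgt (pid 81015) from the saved tgt.json, and the "SHM: clean 0, shm_clean 0" superblock load above shows recovery taking over. A rough sketch of that kill-and-restart sequence, using the variable names visible in the trace (waitforlisten is the autotest helper that blocks until the RPC socket answers):

    # Sketch of tcp_target_shutdown_dirty + tcp_target_setup as traced above;
    # capturing the PID via $! is an assumption of this sketch.
    kill -9 "$spdk_tgt_pid"   # SIGKILL: the FTL superblock stays marked dirty
    unset spdk_tgt_pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # wait for /var/tmp/spdk.sock to answer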
00:28:03.188 [2024-12-06 04:18:50.512633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.188 [2024-12-06 04:18:50.521325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.188 [2024-12-06 04:18:50.521358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:03.188 [2024-12-06 04:18:50.521368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:28:03.188 [2024-12-06 04:18:50.521376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.188 [2024-12-06 04:18:50.521678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.188 [2024-12-06 04:18:50.521699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:03.188 [2024-12-06 04:18:50.521709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.230 ms 00:28:03.188 [2024-12-06 04:18:50.521733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.188 [2024-12-06 04:18:50.521781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.188 [2024-12-06 04:18:50.521791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:03.188 [2024-12-06 04:18:50.521799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:28:03.188 [2024-12-06 04:18:50.521807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.188 [2024-12-06 04:18:50.521831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.188 [2024-12-06 04:18:50.521840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:03.188 [2024-12-06 04:18:50.521848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:03.188 [2024-12-06 04:18:50.521855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.188 [2024-12-06 04:18:50.521874] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:03.188 [2024-12-06 04:18:50.524736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.188 [2024-12-06 04:18:50.524769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:03.188 [2024-12-06 04:18:50.524778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.866 ms 00:28:03.188 [2024-12-06 04:18:50.524786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.188 [2024-12-06 04:18:50.524825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.188 [2024-12-06 04:18:50.524837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:03.189 [2024-12-06 04:18:50.524845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:03.189 [2024-12-06 04:18:50.524852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.189 [2024-12-06 04:18:50.524871] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:03.189 [2024-12-06 04:18:50.524889] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:03.189 [2024-12-06 04:18:50.524922] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:03.189 [2024-12-06 04:18:50.524943] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:03.189 [2024-12-06 
04:18:50.525047] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:03.189 [2024-12-06 04:18:50.525062] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:03.189 [2024-12-06 04:18:50.525073] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:03.189 [2024-12-06 04:18:50.525082] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:03.189 [2024-12-06 04:18:50.525094] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:03.189 [2024-12-06 04:18:50.525105] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:03.189 [2024-12-06 04:18:50.525113] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:03.189 [2024-12-06 04:18:50.525125] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:03.189 [2024-12-06 04:18:50.525131] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:03.189 [2024-12-06 04:18:50.525141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.189 [2024-12-06 04:18:50.525156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:03.189 [2024-12-06 04:18:50.525164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.272 ms 00:28:03.189 [2024-12-06 04:18:50.525178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.189 [2024-12-06 04:18:50.525278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.189 [2024-12-06 04:18:50.525293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:03.189 [2024-12-06 04:18:50.525300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.084 ms 00:28:03.189 [2024-12-06 04:18:50.525311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.189 [2024-12-06 04:18:50.525424] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:03.189 [2024-12-06 04:18:50.525442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:03.189 [2024-12-06 04:18:50.525451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:03.189 [2024-12-06 04:18:50.525458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.189 [2024-12-06 04:18:50.525470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:03.189 [2024-12-06 04:18:50.525477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:03.189 [2024-12-06 04:18:50.525484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:03.189 [2024-12-06 04:18:50.525491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:03.189 [2024-12-06 04:18:50.525498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:03.189 [2024-12-06 04:18:50.525504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.189 [2024-12-06 04:18:50.525511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:03.189 [2024-12-06 04:18:50.525518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:03.189 [2024-12-06 04:18:50.525524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.189 [2024-12-06 
04:18:50.525531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:03.189 [2024-12-06 04:18:50.525537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:03.189 [2024-12-06 04:18:50.525543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.189 [2024-12-06 04:18:50.525550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:03.189 [2024-12-06 04:18:50.525558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:03.189 [2024-12-06 04:18:50.525564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.189 [2024-12-06 04:18:50.525571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:03.189 [2024-12-06 04:18:50.525578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:03.189 [2024-12-06 04:18:50.525589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:03.189 [2024-12-06 04:18:50.525596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:03.189 [2024-12-06 04:18:50.525602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:03.189 [2024-12-06 04:18:50.525608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:03.189 [2024-12-06 04:18:50.525614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:03.189 [2024-12-06 04:18:50.525622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:03.189 [2024-12-06 04:18:50.525628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:03.189 [2024-12-06 04:18:50.525635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:03.189 [2024-12-06 04:18:50.525642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:03.189 [2024-12-06 04:18:50.525648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:03.189 [2024-12-06 04:18:50.525655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:03.189 [2024-12-06 04:18:50.525661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:03.189 [2024-12-06 04:18:50.525667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.189 [2024-12-06 04:18:50.525674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:03.189 [2024-12-06 04:18:50.525680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:03.189 [2024-12-06 04:18:50.525687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.189 [2024-12-06 04:18:50.525693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:03.189 [2024-12-06 04:18:50.525700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:03.189 [2024-12-06 04:18:50.525706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.189 [2024-12-06 04:18:50.525712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:03.189 [2024-12-06 04:18:50.525730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:03.189 [2024-12-06 04:18:50.525736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.189 [2024-12-06 04:18:50.525743] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:03.189 [2024-12-06 04:18:50.525750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:03.189 
[2024-12-06 04:18:50.525757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:03.189 [2024-12-06 04:18:50.525764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.189 [2024-12-06 04:18:50.525775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:03.189 [2024-12-06 04:18:50.525782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:03.189 [2024-12-06 04:18:50.525789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:03.189 [2024-12-06 04:18:50.525796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:03.189 [2024-12-06 04:18:50.525802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:03.189 [2024-12-06 04:18:50.525809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:03.189 [2024-12-06 04:18:50.525817] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:03.189 [2024-12-06 04:18:50.525830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:03.189 [2024-12-06 04:18:50.525838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:03.189 [2024-12-06 04:18:50.525845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:03.189 [2024-12-06 04:18:50.525852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:03.189 [2024-12-06 04:18:50.525859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:03.189 [2024-12-06 04:18:50.525866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:03.189 [2024-12-06 04:18:50.525876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:03.189 [2024-12-06 04:18:50.525884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:03.189 [2024-12-06 04:18:50.525891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:03.189 [2024-12-06 04:18:50.525897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:03.189 [2024-12-06 04:18:50.525904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:03.189 [2024-12-06 04:18:50.525911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:03.189 [2024-12-06 04:18:50.525918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:03.189 [2024-12-06 04:18:50.525925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:03.189 [2024-12-06 04:18:50.525932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:03.189 [2024-12-06 04:18:50.525939] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:03.189 [2024-12-06 04:18:50.525947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:03.189 [2024-12-06 04:18:50.525957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:03.189 [2024-12-06 04:18:50.525965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:03.189 [2024-12-06 04:18:50.525972] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:03.189 [2024-12-06 04:18:50.525978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:03.189 [2024-12-06 04:18:50.525986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.189 [2024-12-06 04:18:50.525992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:03.189 [2024-12-06 04:18:50.526000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.630 ms 00:28:03.189 [2024-12-06 04:18:50.526011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.189 [2024-12-06 04:18:50.549883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.189 [2024-12-06 04:18:50.549920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:03.189 [2024-12-06 04:18:50.549931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.820 ms 00:28:03.189 [2024-12-06 04:18:50.549939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.189 [2024-12-06 04:18:50.549979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.189 [2024-12-06 04:18:50.549987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:03.189 [2024-12-06 04:18:50.549995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:03.189 [2024-12-06 04:18:50.550002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.189 [2024-12-06 04:18:50.579749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.189 [2024-12-06 04:18:50.579784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:03.189 [2024-12-06 04:18:50.579795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.693 ms 00:28:03.189 [2024-12-06 04:18:50.579802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.189 [2024-12-06 04:18:50.579830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.189 [2024-12-06 04:18:50.579837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:03.189 [2024-12-06 04:18:50.579845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:03.189 [2024-12-06 04:18:50.579855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.189 [2024-12-06 04:18:50.579944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.189 [2024-12-06 04:18:50.579954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:28:03.189 [2024-12-06 04:18:50.579962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:28:03.189 [2024-12-06 04:18:50.579970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.189 [2024-12-06 04:18:50.580007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.189 [2024-12-06 04:18:50.580014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:03.189 [2024-12-06 04:18:50.580022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:28:03.189 [2024-12-06 04:18:50.580029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.189 [2024-12-06 04:18:50.593641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.189 [2024-12-06 04:18:50.593676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:03.189 [2024-12-06 04:18:50.593686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.588 ms 00:28:03.189 [2024-12-06 04:18:50.593693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.189 [2024-12-06 04:18:50.593820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.189 [2024-12-06 04:18:50.593832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:03.189 [2024-12-06 04:18:50.593840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:03.189 [2024-12-06 04:18:50.593848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.189 [2024-12-06 04:18:50.628703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.189 [2024-12-06 04:18:50.628753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:03.189 [2024-12-06 04:18:50.628765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.837 ms 00:28:03.189 [2024-12-06 04:18:50.628774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.189 [2024-12-06 04:18:50.638120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.189 [2024-12-06 04:18:50.638152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:03.189 [2024-12-06 04:18:50.638168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.522 ms 00:28:03.189 [2024-12-06 04:18:50.638175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.189 [2024-12-06 04:18:50.691520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.189 [2024-12-06 04:18:50.691574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:03.190 [2024-12-06 04:18:50.691587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 53.293 ms 00:28:03.190 [2024-12-06 04:18:50.691596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.190 [2024-12-06 04:18:50.691751] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:03.190 [2024-12-06 04:18:50.691850] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:03.190 [2024-12-06 04:18:50.691949] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:03.190 [2024-12-06 04:18:50.692046] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:03.190 [2024-12-06 04:18:50.692055] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.190 [2024-12-06 04:18:50.692063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:03.190 [2024-12-06 04:18:50.692072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.407 ms 00:28:03.190 [2024-12-06 04:18:50.692079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.190 [2024-12-06 04:18:50.692136] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:03.190 [2024-12-06 04:18:50.692148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.190 [2024-12-06 04:18:50.692159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:03.190 [2024-12-06 04:18:50.692168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:03.190 [2024-12-06 04:18:50.692175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.190 [2024-12-06 04:18:50.706678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.190 [2024-12-06 04:18:50.706726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:03.190 [2024-12-06 04:18:50.706737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.482 ms 00:28:03.190 [2024-12-06 04:18:50.706745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.448 [2024-12-06 04:18:50.715371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.448 [2024-12-06 04:18:50.715404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:03.448 [2024-12-06 04:18:50.715413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:03.448 [2024-12-06 04:18:50.715421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.448 [2024-12-06 04:18:50.715504] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:03.448 [2024-12-06 04:18:50.715627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.449 [2024-12-06 04:18:50.715644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:03.449 [2024-12-06 04:18:50.715654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.124 ms 00:28:03.449 [2024-12-06 04:18:50.715661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.707 [2024-12-06 04:18:51.143618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.707 [2024-12-06 04:18:51.143685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:03.707 [2024-12-06 04:18:51.143700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 427.170 ms 00:28:03.707 [2024-12-06 04:18:51.143710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.707 [2024-12-06 04:18:51.147486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.707 [2024-12-06 04:18:51.147524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:03.707 [2024-12-06 04:18:51.147535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.723 ms 00:28:03.707 [2024-12-06 04:18:51.147543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.707 [2024-12-06 04:18:51.147862] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:28:03.707 [2024-12-06 04:18:51.147895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.707 [2024-12-06 04:18:51.147904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:03.707 [2024-12-06 04:18:51.147913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.322 ms 00:28:03.707 [2024-12-06 04:18:51.147921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.707 [2024-12-06 04:18:51.147948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.707 [2024-12-06 04:18:51.147957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:03.707 [2024-12-06 04:18:51.147965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:03.707 [2024-12-06 04:18:51.147977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.707 [2024-12-06 04:18:51.148009] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 432.503 ms, result 0 00:28:03.707 [2024-12-06 04:18:51.148045] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:03.707 [2024-12-06 04:18:51.148139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.707 [2024-12-06 04:18:51.148157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:03.707 [2024-12-06 04:18:51.148165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.095 ms 00:28:03.707 [2024-12-06 04:18:51.148172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.275 [2024-12-06 04:18:51.587876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.275 [2024-12-06 04:18:51.587938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:04.275 [2024-12-06 04:18:51.587965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 438.816 ms 00:28:04.275 [2024-12-06 04:18:51.587973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.275 [2024-12-06 04:18:51.591688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.275 [2024-12-06 04:18:51.591733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:04.275 [2024-12-06 04:18:51.591743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.837 ms 00:28:04.275 [2024-12-06 04:18:51.591750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.275 [2024-12-06 04:18:51.592355] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:04.275 [2024-12-06 04:18:51.592399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.275 [2024-12-06 04:18:51.592409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:04.275 [2024-12-06 04:18:51.592420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.622 ms 00:28:04.275 [2024-12-06 04:18:51.592427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.275 [2024-12-06 04:18:51.592463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.275 [2024-12-06 04:18:51.592473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:04.275 [2024-12-06 04:18:51.592481] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:04.275 [2024-12-06 04:18:51.592488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.275 [2024-12-06 04:18:51.592523] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 444.472 ms, result 0 00:28:04.275 [2024-12-06 04:18:51.592564] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:04.275 [2024-12-06 04:18:51.592574] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:04.275 [2024-12-06 04:18:51.592584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.275 [2024-12-06 04:18:51.592593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:04.275 [2024-12-06 04:18:51.592601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 877.097 ms 00:28:04.275 [2024-12-06 04:18:51.592608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.275 [2024-12-06 04:18:51.592636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.275 [2024-12-06 04:18:51.592647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:04.275 [2024-12-06 04:18:51.592656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:04.275 [2024-12-06 04:18:51.592663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.275 [2024-12-06 04:18:51.603469] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:04.275 [2024-12-06 04:18:51.603570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.275 [2024-12-06 04:18:51.603580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:04.275 [2024-12-06 04:18:51.603590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.892 ms 00:28:04.275 [2024-12-06 04:18:51.603597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.275 [2024-12-06 04:18:51.604307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.275 [2024-12-06 04:18:51.604330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:04.275 [2024-12-06 04:18:51.604342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.633 ms 00:28:04.275 [2024-12-06 04:18:51.604349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.275 [2024-12-06 04:18:51.606573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.275 [2024-12-06 04:18:51.606596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:04.275 [2024-12-06 04:18:51.606606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.208 ms 00:28:04.275 [2024-12-06 04:18:51.606614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.275 [2024-12-06 04:18:51.606650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.275 [2024-12-06 04:18:51.606658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:04.275 [2024-12-06 04:18:51.606666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:04.275 [2024-12-06 04:18:51.606677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.275 [2024-12-06 04:18:51.606781] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.275 [2024-12-06 04:18:51.606792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:04.275 [2024-12-06 04:18:51.606800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:04.275 [2024-12-06 04:18:51.606807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.275 [2024-12-06 04:18:51.606826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.276 [2024-12-06 04:18:51.606834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:04.276 [2024-12-06 04:18:51.606841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:04.276 [2024-12-06 04:18:51.606848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.276 [2024-12-06 04:18:51.606878] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:04.276 [2024-12-06 04:18:51.606887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.276 [2024-12-06 04:18:51.606895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:04.276 [2024-12-06 04:18:51.606902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:04.276 [2024-12-06 04:18:51.606909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.276 [2024-12-06 04:18:51.606956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.276 [2024-12-06 04:18:51.606964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:04.276 [2024-12-06 04:18:51.606972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:28:04.276 [2024-12-06 04:18:51.606979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.276 [2024-12-06 04:18:51.607856] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1111.441 ms, result 0 00:28:04.276 [2024-12-06 04:18:51.620202] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:04.276 [2024-12-06 04:18:51.636194] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:04.276 [2024-12-06 04:18:51.644310] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:04.535 04:18:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:04.535 04:18:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:04.535 04:18:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:04.535 04:18:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:04.535 04:18:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:04.535 04:18:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:04.535 04:18:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:04.535 04:18:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:04.535 Validate MD5 checksum, iteration 1 00:28:04.535 04:18:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:04.535 04:18:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:04.535 04:18:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:04.535 04:18:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:04.535 04:18:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:04.535 04:18:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:04.535 04:18:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:04.818 [2024-12-06 04:18:52.084668] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:28:04.818 [2024-12-06 04:18:52.084800] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81046 ] 00:28:04.818 [2024-12-06 04:18:52.240589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.818 [2024-12-06 04:18:52.319134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.721  [2024-12-06T04:18:54.248Z] Copying: 766/1024 [MB] (766 MBps) [2024-12-06T04:19:00.861Z] Copying: 1024/1024 [MB] (average 749 MBps) 00:28:13.334 00:28:13.334 04:18:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:13.334 04:18:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:14.731 04:19:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:14.731 Validate MD5 checksum, iteration 2 00:28:14.731 04:19:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=de660a71cfc274d8d5d2a8a2e812e038 00:28:14.731 04:19:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ de660a71cfc274d8d5d2a8a2e812e038 != \d\e\6\6\0\a\7\1\c\f\c\2\7\4\d\8\d\5\d\2\a\8\a\2\e\8\1\2\e\0\3\8 ]] 00:28:14.731 04:19:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:14.731 04:19:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:14.731 04:19:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:14.731 04:19:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:14.731 04:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:14.731 04:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:14.731 04:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:14.731 04:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:14.731 04:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:14.731 [2024-12-06 04:19:01.895585] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 00:28:14.731 [2024-12-06 04:19:01.895697] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81153 ] 00:28:14.731 [2024-12-06 04:19:02.051616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.731 [2024-12-06 04:19:02.128732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.632  [2024-12-06T04:19:04.418Z] Copying: 657/1024 [MB] (657 MBps) [2024-12-06T04:19:05.352Z] Copying: 1024/1024 [MB] (average 653 MBps) 00:28:17.825 00:28:17.825 04:19:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:17.825 04:19:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:19.722 04:19:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:19.722 04:19:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a2af19ac5e4fe7e6021bc3746742d25e 00:28:19.722 04:19:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a2af19ac5e4fe7e6021bc3746742d25e != \a\2\a\f\1\9\a\c\5\e\4\f\e\7\e\6\0\2\1\b\c\3\7\4\6\7\4\2\d\2\5\e ]] 00:28:19.722 04:19:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:19.722 04:19:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:19.722 04:19:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:19.722 04:19:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:19.722 04:19:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:19.722 04:19:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:19.982 04:19:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:19.982 04:19:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:19.982 04:19:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:19.982 04:19:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:19.982 04:19:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81015 ]] 00:28:19.982 04:19:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81015 00:28:19.982 04:19:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81015 ']' 00:28:19.982 04:19:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 81015 00:28:19.982 04:19:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:19.982 04:19:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:19.982 04:19:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81015 00:28:19.982 killing process with pid 81015 00:28:19.982 04:19:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:19.982 04:19:07 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:19.982 04:19:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81015' 00:28:19.982 04:19:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 81015 00:28:19.982 04:19:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 81015 00:28:20.550 [2024-12-06 04:19:07.892827] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:20.550 [2024-12-06 04:19:07.904027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.550 [2024-12-06 04:19:07.904068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:20.550 [2024-12-06 04:19:07.904078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:20.550 [2024-12-06 04:19:07.904085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.550 [2024-12-06 04:19:07.904103] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:20.550 [2024-12-06 04:19:07.906193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.550 [2024-12-06 04:19:07.906219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:20.550 [2024-12-06 04:19:07.906231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.079 ms 00:28:20.550 [2024-12-06 04:19:07.906238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.550 [2024-12-06 04:19:07.906436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.550 [2024-12-06 04:19:07.906455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:20.550 [2024-12-06 04:19:07.906469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.181 ms 00:28:20.550 [2024-12-06 04:19:07.906475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.550 [2024-12-06 04:19:07.907588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.550 [2024-12-06 04:19:07.907612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:20.550 [2024-12-06 04:19:07.907620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.100 ms 00:28:20.550 [2024-12-06 04:19:07.907629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.550 [2024-12-06 04:19:07.908484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.550 [2024-12-06 04:19:07.908505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:20.550 [2024-12-06 04:19:07.908513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.831 ms 00:28:20.550 [2024-12-06 04:19:07.908519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.550 [2024-12-06 04:19:07.916045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.550 [2024-12-06 04:19:07.916077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:20.550 [2024-12-06 04:19:07.916085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.498 ms 00:28:20.550 [2024-12-06 04:19:07.916095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.550 [2024-12-06 04:19:07.919957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.550 [2024-12-06 04:19:07.919987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:28:20.550 [2024-12-06 04:19:07.919996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.833 ms 00:28:20.550 [2024-12-06 04:19:07.920002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.550 [2024-12-06 04:19:07.920065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.550 [2024-12-06 04:19:07.920073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:20.550 [2024-12-06 04:19:07.920079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:28:20.550 [2024-12-06 04:19:07.920089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.550 [2024-12-06 04:19:07.927349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.550 [2024-12-06 04:19:07.927377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:20.550 [2024-12-06 04:19:07.927384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.246 ms 00:28:20.550 [2024-12-06 04:19:07.927390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.550 [2024-12-06 04:19:07.934916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.550 [2024-12-06 04:19:07.934943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:20.550 [2024-12-06 04:19:07.934951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.500 ms 00:28:20.550 [2024-12-06 04:19:07.934956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.550 [2024-12-06 04:19:07.941860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.550 [2024-12-06 04:19:07.941887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:20.550 [2024-12-06 04:19:07.941894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.878 ms 00:28:20.551 [2024-12-06 04:19:07.941900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.551 [2024-12-06 04:19:07.949003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.551 [2024-12-06 04:19:07.949030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:20.551 [2024-12-06 04:19:07.949037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.056 ms 00:28:20.551 [2024-12-06 04:19:07.949042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.551 [2024-12-06 04:19:07.949067] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:20.551 [2024-12-06 04:19:07.949079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:20.551 [2024-12-06 04:19:07.949088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:20.551 [2024-12-06 04:19:07.949094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:20.551 [2024-12-06 04:19:07.949101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:20.551 [2024-12-06 04:19:07.949107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:20.551 [2024-12-06 04:19:07.949113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:20.551 [2024-12-06 04:19:07.949119] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:20.551 [2024-12-06 04:19:07.949125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:20.551 [2024-12-06 04:19:07.949131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:20.551 [2024-12-06 04:19:07.949137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:20.551 [2024-12-06 04:19:07.949143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:20.551 [2024-12-06 04:19:07.949149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:20.551 [2024-12-06 04:19:07.949154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:20.551 [2024-12-06 04:19:07.949160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:20.551 [2024-12-06 04:19:07.949165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:20.551 [2024-12-06 04:19:07.949171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:20.551 [2024-12-06 04:19:07.949176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:20.551 [2024-12-06 04:19:07.949182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:20.551 [2024-12-06 04:19:07.949189] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:20.551 [2024-12-06 04:19:07.949194] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 3cd2e71b-32c0-46ae-b13c-98922401b3c9 00:28:20.551 [2024-12-06 04:19:07.949201] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:20.551 [2024-12-06 04:19:07.949206] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:20.551 [2024-12-06 04:19:07.949212] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:20.551 [2024-12-06 04:19:07.949217] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:20.551 [2024-12-06 04:19:07.949223] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:20.551 [2024-12-06 04:19:07.949229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:20.551 [2024-12-06 04:19:07.949238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:20.551 [2024-12-06 04:19:07.949244] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:20.551 [2024-12-06 04:19:07.949249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:20.551 [2024-12-06 04:19:07.949254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.551 [2024-12-06 04:19:07.949263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:20.551 [2024-12-06 04:19:07.949270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.189 ms 00:28:20.551 [2024-12-06 04:19:07.949276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.551 [2024-12-06 04:19:07.958698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.551 [2024-12-06 04:19:07.958740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:28:20.551 [2024-12-06 04:19:07.958748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.408 ms 00:28:20.551 [2024-12-06 04:19:07.958755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.551 [2024-12-06 04:19:07.959024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.551 [2024-12-06 04:19:07.959041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:20.551 [2024-12-06 04:19:07.959049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.249 ms 00:28:20.551 [2024-12-06 04:19:07.959054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.551 [2024-12-06 04:19:07.991554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:20.551 [2024-12-06 04:19:07.991593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:20.551 [2024-12-06 04:19:07.991602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:20.551 [2024-12-06 04:19:07.991609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.551 [2024-12-06 04:19:07.991648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:20.551 [2024-12-06 04:19:07.991654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:20.551 [2024-12-06 04:19:07.991660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:20.551 [2024-12-06 04:19:07.991666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.551 [2024-12-06 04:19:07.991738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:20.551 [2024-12-06 04:19:07.991746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:20.551 [2024-12-06 04:19:07.991753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:20.551 [2024-12-06 04:19:07.991759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.551 [2024-12-06 04:19:07.991775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:20.551 [2024-12-06 04:19:07.991782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:20.551 [2024-12-06 04:19:07.991788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:20.551 [2024-12-06 04:19:07.991794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.551 [2024-12-06 04:19:08.051000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:20.551 [2024-12-06 04:19:08.051040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:20.551 [2024-12-06 04:19:08.051050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:20.551 [2024-12-06 04:19:08.051056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.810 [2024-12-06 04:19:08.099097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:20.811 [2024-12-06 04:19:08.099138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:20.811 [2024-12-06 04:19:08.099147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:20.811 [2024-12-06 04:19:08.099154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.811 [2024-12-06 04:19:08.099225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:20.811 [2024-12-06 04:19:08.099234] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:20.811 [2024-12-06 04:19:08.099240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:20.811 [2024-12-06 04:19:08.099246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.811 [2024-12-06 04:19:08.099280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:20.811 [2024-12-06 04:19:08.099296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:20.811 [2024-12-06 04:19:08.099302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:20.811 [2024-12-06 04:19:08.099307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.811 [2024-12-06 04:19:08.099376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:20.811 [2024-12-06 04:19:08.099383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:20.811 [2024-12-06 04:19:08.099389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:20.811 [2024-12-06 04:19:08.099395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.811 [2024-12-06 04:19:08.099419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:20.811 [2024-12-06 04:19:08.099425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:20.811 [2024-12-06 04:19:08.099433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:20.811 [2024-12-06 04:19:08.099440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.811 [2024-12-06 04:19:08.099469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:20.811 [2024-12-06 04:19:08.099476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:20.811 [2024-12-06 04:19:08.099481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:20.811 [2024-12-06 04:19:08.099488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.811 [2024-12-06 04:19:08.099519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:20.811 [2024-12-06 04:19:08.099529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:20.811 [2024-12-06 04:19:08.099535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:20.811 [2024-12-06 04:19:08.099541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.811 [2024-12-06 04:19:08.099635] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 195.586 ms, result 0 00:28:21.421 04:19:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:21.421 04:19:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:21.421 04:19:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:21.421 04:19:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:21.421 04:19:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:21.421 04:19:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:21.421 Remove shared memory files 00:28:21.421 04:19:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:21.421 04:19:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:28:21.421 04:19:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:21.421 04:19:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:21.421 04:19:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid80823 00:28:21.421 04:19:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:21.421 04:19:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:21.421 00:28:21.421 real 1m17.711s 00:28:21.421 user 1m47.880s 00:28:21.421 sys 0m16.877s 00:28:21.421 04:19:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:21.421 04:19:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:21.421 ************************************ 00:28:21.422 END TEST ftl_upgrade_shutdown 00:28:21.422 ************************************ 00:28:21.422 04:19:08 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:28:21.422 04:19:08 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:21.422 04:19:08 ftl -- ftl/ftl.sh@14 -- # killprocess 74970 00:28:21.422 04:19:08 ftl -- common/autotest_common.sh@954 -- # '[' -z 74970 ']' 00:28:21.422 04:19:08 ftl -- common/autotest_common.sh@958 -- # kill -0 74970 00:28:21.422 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74970) - No such process 00:28:21.422 Process with pid 74970 is not found 00:28:21.422 04:19:08 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 74970 is not found' 00:28:21.422 04:19:08 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:28:21.422 04:19:08 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81255 00:28:21.422 04:19:08 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81255 00:28:21.422 04:19:08 ftl -- common/autotest_common.sh@835 -- # '[' -z 81255 ']' 00:28:21.422 04:19:08 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.422 04:19:08 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:21.422 04:19:08 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:21.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.422 04:19:08 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.422 04:19:08 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:21.422 04:19:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:21.422 [2024-12-06 04:19:08.864616] Starting SPDK v25.01-pre git sha1 02b805e62 / DPDK 24.03.0 initialization... 
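The 'Validate MD5 checksum' phase traced above reads the FTL bdev back over NVMe/TCP in consecutive 1 GiB windows and checks that the data survived the shutdown/upgrade cycle intact. A minimal bash sketch of that loop, reconstructed only from the commands visible in the trace (tcp_dd from ftl/common.sh, md5sum, cut); iterations, testdir, and the expected_sums array are illustrative assumptions, not the actual upgrade_shutdown.sh source:

  # Sketch: compare per-window MD5 sums after an FTL shutdown/upgrade cycle.
  iterations=2          # the trace shows two passes (skip=0, then skip=1024)
  skip=0
  for ((i = 0; i < iterations; i++)); do
      echo "Validate MD5 checksum, iteration $((i + 1))"
      # tcp_dd (ftl/common.sh) wraps spdk_dd to read 1024 x 1 MiB blocks from ftln1.
      tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
      skip=$((skip + 1024))
      sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
      # expected_sums is hypothetical: the real test compares against sums
      # recorded when the device was originally filled, before shutdown.
      [[ $sum == "${expected_sums[i]}" ]] || exit 1
  done

The matching sums in the trace (de660a71... and a2af19ac... compared against themselves) are what let both iterations pass here.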
00:28:21.422 [2024-12-06 04:19:08.865090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81255 ] 00:28:21.680 [2024-12-06 04:19:09.020547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.680 [2024-12-06 04:19:09.100768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.247 04:19:09 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.247 04:19:09 ftl -- common/autotest_common.sh@868 -- # return 0 00:28:22.247 04:19:09 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:22.505 nvme0n1 00:28:22.505 04:19:09 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:28:22.505 04:19:09 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:22.505 04:19:09 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:22.763 04:19:10 ftl -- ftl/common.sh@28 -- # stores=24686a7f-610c-4fb0-9011-ca6e7f3659c7 00:28:22.764 04:19:10 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:28:22.764 04:19:10 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 24686a7f-610c-4fb0-9011-ca6e7f3659c7 00:28:23.022 04:19:10 ftl -- ftl/ftl.sh@23 -- # killprocess 81255 00:28:23.022 04:19:10 ftl -- common/autotest_common.sh@954 -- # '[' -z 81255 ']' 00:28:23.022 04:19:10 ftl -- common/autotest_common.sh@958 -- # kill -0 81255 00:28:23.022 04:19:10 ftl -- common/autotest_common.sh@959 -- # uname 00:28:23.022 04:19:10 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:23.022 04:19:10 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81255 00:28:23.022 04:19:10 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:23.022 killing process with pid 81255 00:28:23.022 04:19:10 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:23.022 04:19:10 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81255' 00:28:23.022 04:19:10 ftl -- common/autotest_common.sh@973 -- # kill 81255 00:28:23.022 04:19:10 ftl -- common/autotest_common.sh@978 -- # wait 81255 00:28:24.397 04:19:11 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:24.397 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:24.397 Waiting for block devices as requested 00:28:24.397 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:24.397 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:24.656 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:28:24.656 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:28:29.920 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:28:29.920 Remove shared memory files 00:28:29.920 04:19:17 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:28:29.920 04:19:17 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:29.920 04:19:17 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:28:29.920 04:19:17 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:28:29.920 04:19:17 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:28:29.920 04:19:17 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:29.920 04:19:17 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:28:29.920 
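The at_ftl_exit teardown just traced follows a reusable pattern: enumerate leftover lvol stores over the RPC socket, delete each one, then stop the spdk_tgt using the killprocess guards from autotest_common.sh. A condensed sketch assembled from those traced calls; spdk_tgt_pid is an assumed variable name and error handling is simplified:

  # Sketch of the traced cleanup: drop leftover lvstores, then stop spdk_tgt.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  stores=$($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
  for lvs in $stores; do
      $rpc bdev_lvol_delete_lvstore -u "$lvs"
  done
  pid=$spdk_tgt_pid                           # assumed name for pid 81255 above
  if kill -0 "$pid" 2>/dev/null; then         # only signal a process that is alive
      # Refuse to kill sudo itself, mirroring the '[ reactor_0 = sudo ]' check.
      [[ $(ps --no-headers -o comm= "$pid") != sudo ]] && kill "$pid"
      wait "$pid"                             # reap it before removing shm files
  fi

Guarding with kill -0 first is what produces the graceful 'No such process' / 'Process with pid ... is not found' path seen earlier for the already-dead pid 74970.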
************************************ 00:28:29.920 END TEST ftl 00:28:29.920 ************************************ 00:28:29.920 00:28:29.920 real 8m59.545s 00:28:29.920 user 11m9.637s 00:28:29.920 sys 1m11.259s 00:28:29.920 04:19:17 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:29.920 04:19:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:29.920 04:19:17 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:28:29.920 04:19:17 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:28:29.920 04:19:17 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:28:29.920 04:19:17 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:28:29.920 04:19:17 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:28:29.920 04:19:17 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:28:29.920 04:19:17 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:28:29.920 04:19:17 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:28:29.920 04:19:17 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:28:29.920 04:19:17 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:28:29.920 04:19:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:29.920 04:19:17 -- common/autotest_common.sh@10 -- # set +x 00:28:29.920 04:19:17 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:28:29.920 04:19:17 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:28:29.920 04:19:17 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:28:29.920 04:19:17 -- common/autotest_common.sh@10 -- # set +x 00:28:30.855 INFO: APP EXITING 00:28:30.855 INFO: killing all VMs 00:28:30.855 INFO: killing vhost app 00:28:30.855 INFO: EXIT DONE 00:28:31.114 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:31.374 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:31.374 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:31.374 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:28:31.374 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:28:31.941 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:31.941 Cleaning 00:28:31.941 Removing: /var/run/dpdk/spdk0/config 00:28:31.941 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:31.941 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:31.941 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:31.941 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:32.200 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:32.200 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:32.200 Removing: /var/run/dpdk/spdk0 00:28:32.200 Removing: /var/run/dpdk/spdk_pid56957 00:28:32.200 Removing: /var/run/dpdk/spdk_pid57148 00:28:32.200 Removing: /var/run/dpdk/spdk_pid57355 00:28:32.200 Removing: /var/run/dpdk/spdk_pid57448 00:28:32.200 Removing: /var/run/dpdk/spdk_pid57488 00:28:32.200 Removing: /var/run/dpdk/spdk_pid57610 00:28:32.200 Removing: /var/run/dpdk/spdk_pid57623 00:28:32.200 Removing: /var/run/dpdk/spdk_pid57816 00:28:32.200 Removing: /var/run/dpdk/spdk_pid57909 00:28:32.200 Removing: /var/run/dpdk/spdk_pid58005 00:28:32.200 Removing: /var/run/dpdk/spdk_pid58116 00:28:32.200 Removing: /var/run/dpdk/spdk_pid58208 00:28:32.200 Removing: /var/run/dpdk/spdk_pid58247 00:28:32.200 Removing: /var/run/dpdk/spdk_pid58284 00:28:32.200 Removing: /var/run/dpdk/spdk_pid58354 00:28:32.200 Removing: /var/run/dpdk/spdk_pid58444 00:28:32.200 Removing: /var/run/dpdk/spdk_pid58880 00:28:32.200 Removing: /var/run/dpdk/spdk_pid58944 
00:28:32.200 Removing: /var/run/dpdk/spdk_pid58996 00:28:32.200 Removing: /var/run/dpdk/spdk_pid59012 00:28:32.200 Removing: /var/run/dpdk/spdk_pid59120 00:28:32.200 Removing: /var/run/dpdk/spdk_pid59130 00:28:32.200 Removing: /var/run/dpdk/spdk_pid59238 00:28:32.200 Removing: /var/run/dpdk/spdk_pid59253 00:28:32.200 Removing: /var/run/dpdk/spdk_pid59307 00:28:32.200 Removing: /var/run/dpdk/spdk_pid59325 00:28:32.200 Removing: /var/run/dpdk/spdk_pid59377 00:28:32.200 Removing: /var/run/dpdk/spdk_pid59390 00:28:32.200 Removing: /var/run/dpdk/spdk_pid59550 00:28:32.200 Removing: /var/run/dpdk/spdk_pid59587 00:28:32.200 Removing: /var/run/dpdk/spdk_pid59670 00:28:32.200 Removing: /var/run/dpdk/spdk_pid59837 00:28:32.200 Removing: /var/run/dpdk/spdk_pid59921 00:28:32.200 Removing: /var/run/dpdk/spdk_pid59963 00:28:32.200 Removing: /var/run/dpdk/spdk_pid60385 00:28:32.200 Removing: /var/run/dpdk/spdk_pid60483 00:28:32.200 Removing: /var/run/dpdk/spdk_pid60594 00:28:32.200 Removing: /var/run/dpdk/spdk_pid60663 00:28:32.200 Removing: /var/run/dpdk/spdk_pid60694 00:28:32.200 Removing: /var/run/dpdk/spdk_pid60777 00:28:32.200 Removing: /var/run/dpdk/spdk_pid61393 00:28:32.200 Removing: /var/run/dpdk/spdk_pid61424 00:28:32.200 Removing: /var/run/dpdk/spdk_pid61888 00:28:32.200 Removing: /var/run/dpdk/spdk_pid61986 00:28:32.200 Removing: /var/run/dpdk/spdk_pid62095 00:28:32.200 Removing: /var/run/dpdk/spdk_pid62148 00:28:32.200 Removing: /var/run/dpdk/spdk_pid62179 00:28:32.200 Removing: /var/run/dpdk/spdk_pid62199 00:28:32.200 Removing: /var/run/dpdk/spdk_pid64035 00:28:32.200 Removing: /var/run/dpdk/spdk_pid64171 00:28:32.200 Removing: /var/run/dpdk/spdk_pid64176 00:28:32.200 Removing: /var/run/dpdk/spdk_pid64188 00:28:32.200 Removing: /var/run/dpdk/spdk_pid64235 00:28:32.200 Removing: /var/run/dpdk/spdk_pid64239 00:28:32.200 Removing: /var/run/dpdk/spdk_pid64251 00:28:32.200 Removing: /var/run/dpdk/spdk_pid64296 00:28:32.200 Removing: /var/run/dpdk/spdk_pid64300 00:28:32.200 Removing: /var/run/dpdk/spdk_pid64312 00:28:32.200 Removing: /var/run/dpdk/spdk_pid64357 00:28:32.200 Removing: /var/run/dpdk/spdk_pid64361 00:28:32.200 Removing: /var/run/dpdk/spdk_pid64373 00:28:32.200 Removing: /var/run/dpdk/spdk_pid65757 00:28:32.200 Removing: /var/run/dpdk/spdk_pid65854 00:28:32.200 Removing: /var/run/dpdk/spdk_pid67259 00:28:32.200 Removing: /var/run/dpdk/spdk_pid68990 00:28:32.200 Removing: /var/run/dpdk/spdk_pid69059 00:28:32.200 Removing: /var/run/dpdk/spdk_pid69135 00:28:32.200 Removing: /var/run/dpdk/spdk_pid69240 00:28:32.200 Removing: /var/run/dpdk/spdk_pid69337 00:28:32.200 Removing: /var/run/dpdk/spdk_pid69435 00:28:32.200 Removing: /var/run/dpdk/spdk_pid69509 00:28:32.200 Removing: /var/run/dpdk/spdk_pid69584 00:28:32.200 Removing: /var/run/dpdk/spdk_pid69694 00:28:32.200 Removing: /var/run/dpdk/spdk_pid69786 00:28:32.200 Removing: /var/run/dpdk/spdk_pid69886 00:28:32.200 Removing: /var/run/dpdk/spdk_pid69952 00:28:32.200 Removing: /var/run/dpdk/spdk_pid70032 00:28:32.200 Removing: /var/run/dpdk/spdk_pid70136 00:28:32.200 Removing: /var/run/dpdk/spdk_pid70228 00:28:32.200 Removing: /var/run/dpdk/spdk_pid70329 00:28:32.200 Removing: /var/run/dpdk/spdk_pid70392 00:28:32.200 Removing: /var/run/dpdk/spdk_pid70467 00:28:32.200 Removing: /var/run/dpdk/spdk_pid70577 00:28:32.200 Removing: /var/run/dpdk/spdk_pid70663 00:28:32.200 Removing: /var/run/dpdk/spdk_pid70759 00:28:32.200 Removing: /var/run/dpdk/spdk_pid70822 00:28:32.200 Removing: /var/run/dpdk/spdk_pid70896 00:28:32.200 Removing: 
/var/run/dpdk/spdk_pid70971 00:28:32.200 Removing: /var/run/dpdk/spdk_pid71040 00:28:32.200 Removing: /var/run/dpdk/spdk_pid71149 00:28:32.200 Removing: /var/run/dpdk/spdk_pid71234 00:28:32.200 Removing: /var/run/dpdk/spdk_pid71329 00:28:32.200 Removing: /var/run/dpdk/spdk_pid71397 00:28:32.200 Removing: /var/run/dpdk/spdk_pid71476 00:28:32.200 Removing: /var/run/dpdk/spdk_pid71547 00:28:32.200 Removing: /var/run/dpdk/spdk_pid71620 00:28:32.200 Removing: /var/run/dpdk/spdk_pid71723 00:28:32.200 Removing: /var/run/dpdk/spdk_pid71814 00:28:32.200 Removing: /var/run/dpdk/spdk_pid71958 00:28:32.200 Removing: /var/run/dpdk/spdk_pid72231 00:28:32.200 Removing: /var/run/dpdk/spdk_pid72273 00:28:32.200 Removing: /var/run/dpdk/spdk_pid72704 00:28:32.200 Removing: /var/run/dpdk/spdk_pid72894 00:28:32.200 Removing: /var/run/dpdk/spdk_pid72989 00:28:32.200 Removing: /var/run/dpdk/spdk_pid73100 00:28:32.200 Removing: /var/run/dpdk/spdk_pid73154 00:28:32.200 Removing: /var/run/dpdk/spdk_pid73174 00:28:32.200 Removing: /var/run/dpdk/spdk_pid73509 00:28:32.200 Removing: /var/run/dpdk/spdk_pid73558 00:28:32.200 Removing: /var/run/dpdk/spdk_pid73631 00:28:32.200 Removing: /var/run/dpdk/spdk_pid74020 00:28:32.200 Removing: /var/run/dpdk/spdk_pid74165 00:28:32.460 Removing: /var/run/dpdk/spdk_pid74970 00:28:32.460 Removing: /var/run/dpdk/spdk_pid75102 00:28:32.460 Removing: /var/run/dpdk/spdk_pid75266 00:28:32.460 Removing: /var/run/dpdk/spdk_pid75358 00:28:32.460 Removing: /var/run/dpdk/spdk_pid75644 00:28:32.460 Removing: /var/run/dpdk/spdk_pid75886 00:28:32.460 Removing: /var/run/dpdk/spdk_pid76222 00:28:32.460 Removing: /var/run/dpdk/spdk_pid76400 00:28:32.460 Removing: /var/run/dpdk/spdk_pid76591 00:28:32.460 Removing: /var/run/dpdk/spdk_pid76644 00:28:32.460 Removing: /var/run/dpdk/spdk_pid76937 00:28:32.460 Removing: /var/run/dpdk/spdk_pid76964 00:28:32.460 Removing: /var/run/dpdk/spdk_pid77018 00:28:32.460 Removing: /var/run/dpdk/spdk_pid77247 00:28:32.460 Removing: /var/run/dpdk/spdk_pid77451 00:28:32.460 Removing: /var/run/dpdk/spdk_pid77712 00:28:32.460 Removing: /var/run/dpdk/spdk_pid77981 00:28:32.460 Removing: /var/run/dpdk/spdk_pid78272 00:28:32.460 Removing: /var/run/dpdk/spdk_pid78788 00:28:32.460 Removing: /var/run/dpdk/spdk_pid78913 00:28:32.460 Removing: /var/run/dpdk/spdk_pid78990 00:28:32.460 Removing: /var/run/dpdk/spdk_pid79354 00:28:32.460 Removing: /var/run/dpdk/spdk_pid79411 00:28:32.460 Removing: /var/run/dpdk/spdk_pid79698 00:28:32.460 Removing: /var/run/dpdk/spdk_pid79979 00:28:32.460 Removing: /var/run/dpdk/spdk_pid80321 00:28:32.460 Removing: /var/run/dpdk/spdk_pid80433 00:28:32.460 Removing: /var/run/dpdk/spdk_pid80475 00:28:32.460 Removing: /var/run/dpdk/spdk_pid80528 00:28:32.460 Removing: /var/run/dpdk/spdk_pid80578 00:28:32.460 Removing: /var/run/dpdk/spdk_pid80636 00:28:32.460 Removing: /var/run/dpdk/spdk_pid80823 00:28:32.460 Removing: /var/run/dpdk/spdk_pid80894 00:28:32.460 Removing: /var/run/dpdk/spdk_pid80953 00:28:32.460 Removing: /var/run/dpdk/spdk_pid81015 00:28:32.460 Removing: /var/run/dpdk/spdk_pid81046 00:28:32.460 Removing: /var/run/dpdk/spdk_pid81153 00:28:32.460 Removing: /var/run/dpdk/spdk_pid81255 00:28:32.460 Clean 00:28:32.460 04:19:19 -- common/autotest_common.sh@1453 -- # return 0 00:28:32.460 04:19:19 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:28:32.460 04:19:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:32.460 04:19:19 -- common/autotest_common.sh@10 -- # set +x 00:28:32.460 04:19:19 -- spdk/autotest.sh@391 -- # 
timing_exit autotest 00:28:32.460 04:19:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:32.460 04:19:19 -- common/autotest_common.sh@10 -- # set +x 00:28:32.460 04:19:19 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:32.460 04:19:19 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:32.460 04:19:19 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:32.460 04:19:19 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:28:32.460 04:19:19 -- spdk/autotest.sh@398 -- # hostname 00:28:32.460 04:19:19 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:32.720 geninfo: WARNING: invalid characters removed from testname! 00:28:59.258 04:19:43 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:59.258 04:19:46 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:01.161 04:19:48 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:03.064 04:19:50 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:05.014 04:19:52 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:06.914 04:19:54 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:08.810 04:19:56 -- spdk/autotest.sh@408 -- # 
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:08.810 04:19:56 -- spdk/autorun.sh@1 -- $ timing_finish 00:29:08.810 04:19:56 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:29:08.810 04:19:56 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:08.810 04:19:56 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:29:08.810 04:19:56 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:08.810 + [[ -n 5028 ]] 00:29:08.810 + sudo kill 5028 00:29:09.075 [Pipeline] } 00:29:09.090 [Pipeline] // timeout 00:29:09.095 [Pipeline] } 00:29:09.109 [Pipeline] // stage 00:29:09.114 [Pipeline] } 00:29:09.128 [Pipeline] // catchError 00:29:09.136 [Pipeline] stage 00:29:09.138 [Pipeline] { (Stop VM) 00:29:09.148 [Pipeline] sh 00:29:09.424 + vagrant halt 00:29:11.953 ==> default: Halting domain... 00:29:16.148 [Pipeline] sh 00:29:16.425 + vagrant destroy -f 00:29:18.952 ==> default: Removing domain... 00:29:19.527 [Pipeline] sh 00:29:19.802 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:29:19.810 [Pipeline] } 00:29:19.823 [Pipeline] // stage 00:29:19.829 [Pipeline] } 00:29:19.841 [Pipeline] // dir 00:29:19.846 [Pipeline] } 00:29:19.859 [Pipeline] // wrap 00:29:19.865 [Pipeline] } 00:29:19.876 [Pipeline] // catchError 00:29:19.884 [Pipeline] stage 00:29:19.886 [Pipeline] { (Epilogue) 00:29:19.898 [Pipeline] sh 00:29:20.175 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:25.481 [Pipeline] catchError 00:29:25.483 [Pipeline] { 00:29:25.496 [Pipeline] sh 00:29:25.775 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:25.775 Artifacts sizes are good 00:29:25.783 [Pipeline] } 00:29:25.800 [Pipeline] // catchError 00:29:25.813 [Pipeline] archiveArtifacts 00:29:25.820 Archiving artifacts 00:29:25.951 [Pipeline] cleanWs 00:29:25.963 [WS-CLEANUP] Deleting project workspace... 00:29:25.963 [WS-CLEANUP] Deferred wipeout is used... 00:29:25.968 [WS-CLEANUP] done 00:29:25.970 [Pipeline] } 00:29:25.985 [Pipeline] // stage 00:29:25.991 [Pipeline] } 00:29:26.005 [Pipeline] // node 00:29:26.010 [Pipeline] End of Pipeline 00:29:26.047 Finished: SUCCESS
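For reference, the coverage step traced shortly before the pipeline epilogue merges the pre-test and post-test lcov captures and strips external code before the workspace is archived. A sketch using the flags and filter patterns visible in the trace; the long --rc flag set is abbreviated into LCOV_OPTS, an assumed shorthand rather than the script's own variable:

  # Sketch of the traced lcov post-processing.
  LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q'
  out=/home/vagrant/spdk_repo/spdk/../output
  # Merge the baseline and post-test captures into one totals file...
  lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
  # ...then remove DPDK, system headers, and example/app code from the totals.
  lcov $LCOV_OPTS -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
  lcov $LCOV_OPTS -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"
  lcov $LCOV_OPTS -r "$out/cov_total.info" '*/examples/vmd/*' -o "$out/cov_total.info"
  lcov $LCOV_OPTS -r "$out/cov_total.info" '*/app/spdk_lspci/*' -o "$out/cov_total.info"
  lcov $LCOV_OPTS -r "$out/cov_total.info" '*/app/spdk_top/*' -o "$out/cov_total.info"
  rm -f "$out/cov_base.info" "$out/cov_test.info"  # intermediates dropped, as in the log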